Devices, Scripting, Technology, Windows

100-level 101

By my semi-quasi scientific reasoning, I estimate that this scenario has occurred in my presence approximately 34.75 times in the past 10 years. That number could be completely fictitious, but you have to prove me wrong, so good luck.

Anyhow, it happened yesterday, and today I had to actually apply it again myself, so it reminded me to blabber about it again, here, on my blabber blog. Remember, this is 100-level 101 stuff, so if you start rolling your eyes, I warned you already.

Challenge: You need to confirm a registry key is set on a remote client, RIGHT THIS FREAKING SECOND. The registry key is under one of the users who uses that machine. You only know the following:

  • The machine name
  • The user’s first and last name

Caveats: You are logged onto one of the domain controllers. You do not have Configuration Manager. You only have a keyboard, a mouse, a brain, a pair of eyeballs, and possibly a sleeping dog and angry cat nearby. Nothing else. Clothing is optional.

Workflow:

  • You ping the remote computer (e.g. “DT001”) and it responds with a happy wave and a smile.
  • You open trusty, old, bearded REGEDIT.exe and click File / Connect Network Registry. You enter the computer name (e.g. “DT001”). It tells you to **** off.
  • You apply some wax to your mustache and curl the ends neatly, crack your knuckles and continue. If you don’t have a mustache, use someone else’s for now.
  • Open a PowerShell console
  • Type: Get-Service RemoteRegistry -ComputerName DT001
  • It returns some information, including Status = “Stopped”
  • You attempt to start it: Get-Service RemoteRegistry -ComputerName DT001 | Start-Service. But it tells you to **** off.
  • You crack your knuckles once more and don a sinister look, like Daniel Day Lewis in There Will Be Blood
  • Set-Service RemoteRegistry -ComputerName DT001 -StartupType Manual
  • Get-Service RemoteRegistry -ComputerName DT001 | Start-Service
  • So far, so good. Go back to REGEDIT and connect successfully
  • You open HKEY_USERS and see a bunch of SID stuff, like “S-1-5-21-1234567890-0987654321-234234234234-1234”, but you don’t know which one is related to the desired user account
  • Your dog reminds you that you are currently logged onto a domain controller.
  • You know the user is “Jimmy Jerkweed”, so you search for him using Get-ADUser -Filter 'Name -like "*Jerkweed*"' | select *
  • You find one with a SID property that matches the registry key names and dive in

The Short Version

  • ping DT001
  • Set-Service RemoteRegistry -ComputerName DT001 -StartupType Manual
  • Get-Service RemoteRegistry -ComputerName DT001 | Start-Service
  • Regedit.exe / Connect Network Registry / DT001
  • Get-ADUser -Filter 'Name -like "*jerkweed*"' | select SID

Way too many times, this would stop at the second bullet (above). The technician would insist that either a firewall or antivirus was blocking access. Or maybe there was a problem with the machine. Not so.

  • By default, the Remote Registry service is disabled. A disabled service cannot be started, locally or remotely, until its startup type is changed.
  • Without this service running, you cannot connect to the registry from a different machine on the network, regardless of your privileges.
  • In most cases, by default, as a user with direct (or indirect) administrative rights on the remote machine, you can change the service startup type property from “disabled” to “manual”, allowing you to then start it, even remotely.
  • When using a Windows workstation, or member server (not a domain controller), you can also run the Get-ADxxxx cmdlets, if you have RSAT installed and enabled. If you don’t, and can’t, you can install the AdsiPS PowerShell module and do the same using Get-AdsiUser.
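To stitch the short version together, here’s a minimal sketch of the whole dance. Assumptions: Windows PowerShell 5.1 (for -ComputerName on the service cmdlets), admin rights on the remote machine, and the ActiveDirectory module available; the machine and user names are just the examples from above.

```powershell
# Sketch: enable and start Remote Registry on a remote machine, then
# resolve the target user's SID so you know which HKEY_USERS branch to open.
$computer = 'DT001'

# A disabled service can't be started, so flip the startup type first
Set-Service RemoteRegistry -ComputerName $computer -StartupType Manual
Get-Service RemoteRegistry -ComputerName $computer | Start-Service

# Look up the user's SID from AD (name is an example)
$sid = (Get-ADUser -Filter 'Name -like "*Jerkweed*"').SID.Value

Write-Host "Connect REGEDIT to $computer and open HKEY_USERS\$sid"
```

From there, REGEDIT’s Connect Network Registry should let you in, and the SID tells you which branch belongs to your user.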

Cheers!

Personal, Projects, Technology

ud-cmwt, conferences and doggy poo bags

ud-cmwt

A few people asked what I was blabbering about on Twitter recently that mentioned “ud-cmwt”. I promised I would elaborate, alas, I procrastinate, but here’s the nutshell:

Millions of years ago, before the last ice age melted and uncovered what would become Mitch McConnell and Keith Richards, there was a quasi web-project, oh, never mind. I’m too tired to be stupid. Wait. You’re never too tired to be stupid. I should say, too stupid to be tired.

Anyhow: it’s a revised/revamped/retooled CMWT built on Adam Driscoll’s fantabulously increditastical PowerShell module: UniversalDashboard (Community Edition). It’s at 0.0.5 on the PowerShell Gallery, but it’s only a scaffold framework right now. It lets you poke around and list users, groups, computers, packages, applications, OS images, and a bunch of SQL information (server, database, tables, etc.) – linked to ConfigMgr, AD, AzureAD and SQL Server, just like the old CMWT was (except for the AAD part).

Yes, it’s one more project I’m piling on my pile of over-piled project piles. But it’s really fun. And 0.0.6 will add drill-down searching and detail views like CMWT had. Phase 3 will add manipulation gyration to the extrapolation of interpolation. hmm. Here are some screen shots to wow you with…

Upcoming Conferences

PowerShell Saturday – Raleigh

I’ll be presenting at PowerShell Saturday in Raleigh NC on Saturday, September 21, 2019. Yes, you read correctly: I’m actually getting in front of a large group of people and speaking. Some would say torturing is a better word, but I hope it’s at least mildly entertaining. Tickets are still available, but are going fast, so don’t wait too long!

Aside from me, there are quite a few incredible people presenting at the conference, so please don’t base your decision on my participation alone. Do it for the awesome stuff you’ll learn from the other awesome people. But if you attend my session, please stop by afterwards and say hello?

MMS Jazz – New Orleans

I will also be (tentatively) speaking at two (2) sessions, yes, holy shit, at MMS Jazz Edition in New Orleans, November 10 – 14, 2019. The schedule is posted, but may change as things solidify. Tickets are still available, but not for much longer. Just take a look at the list of speakers and you’ll be looking for where your jaw fell off, right next to mine. I’m still pinching myself.

Doggy Poo Bags

So, one of the other human pets walking their dog masters around the neighborhood started bagging their master’s poo and leaving the bags where they tied them up. I’m not kidding. Little green bags all over the place.

One of the neighbors assumed it was me, but I kindly corrected his malformed perceptions. I showed him my custom doggy poo bags I purchased in a crate-sized package from Amazon. No one else uses them in my hood. I’m saving the planet, one pile at a time. Of course, they go into a landfill, sealed in plastic, for the next 5000 years, just like nature intended.

Podcasting

I’ve been toying with podcasting, solo format for now. I ran out of space on SoundCloud, so when I get time, I plan on finding a more suitable place to store and host them from. As if you don’t already have enough noise in your life. I’ll add some more.

Thank you for reading!

Projects, Scripting, Technology

Building Blocks: PowerShell module rollbacks

What is a “roll back” you ask? (I know you didn’t really ask, but for those that wanted to ask…) in general terms, it is rolling back to a previous version of some piece of software, in this case a PowerShell module. For example, going from module version 1.2 back to 1.1.

A customer asked me, “What’s the best way to roll back to a specific version of a PowerShell module?”

I said, “As a consultant, the answer is ‘it depends’“, ha ha! Just kidding. Well, kind of kidding. Okay, not really kidding, but all kidding aside… The process usually follows this workflow (assuming this is a public module, which you do not own/maintain):

Rollback Scenarios

Reminder: Because this happens so often, it’s like struggling with a USB plug: whenever you are working with installing, updating or removing PowerShell modules, open the PowerShell console using “Run as administrator”. Alternatively, you can manage them under your “user” scope alone.

For the following examples, I’m using the PowerShell module: dbatools. There is nothing wrong (as far as I’ve seen) with the latest version, but I’m going to roll it back to a previous version to demonstrate my incoherent blabbering.

Scenario A – Old Version Still Installed

If the PowerShell module was updated using Update-Module, there’s a good chance that the prior version(s) are still installed on the local system. To confirm, use Get-Module <modulename> -ListAvailable.

In this example, I have two (2) versions installed (1.0.15 and 1.0.20). I want to uninstall the newer version (1.0.20) and leave only 1.0.15 installed.

I would normally use Uninstall-Module <modulename> -RequiredVersion <bad-version> or in this example: Uninstall-Module dbatools -RequiredVersion 1.0.20, as shown below.

You may get an error saying another module is “dependent” upon the one you’re trying to remove (see example above). If so, make note of the dependent module, uninstall it, then try the first uninstall again. Once you have the version you want, you can reinstall the dependent module (assuming it’s not actually dependent on the version you just uninstalled, doh!!)
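Sketched out, the uninstall-with-dependency dance looks something like this. The module and versions match the dbatools example above; the dependent module name is purely hypothetical, so substitute whatever your error message actually names.

```powershell
# Example: roll back by removing the newer version, handling a dependency.
# "SomeDependentModule" is a placeholder for whatever blocks the uninstall.
Uninstall-Module dbatools -RequiredVersion 1.0.20 -ErrorAction Stop

# If the above fails with a dependency error, the order becomes:
#   Uninstall-Module SomeDependentModule
#   Uninstall-Module dbatools -RequiredVersion 1.0.20
#   Install-Module SomeDependentModule   # reinstall, if it isn't version-bound

# Confirm what's left installed
Get-Module dbatools -ListAvailable | Select-Object Name, Version
```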

After all this fuss, it now shows dbatools version 1.0.15 installed.

Scenario B – PS Gallery

If only the newest version (the bad version) is installed, check to see if the prior version is still available on the PowerShell Gallery. You can do this using Find-Module <modulename> -AllVersions.

Warning: dbatools lists pretty much every version since inception, so the list is very long.

If the results show the version you want/need, simply uninstall the current module and install the specific version from the PS Gallery.
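As a rough sketch (versions are the same examples used above; run elevated if you’re installing to the AllUsers scope):

```powershell
# Example: confirm the old version is still listed, then swap versions.
Find-Module dbatools -AllVersions |
  Where-Object { $_.Version -eq '1.0.15' }   # confirm it's still in the Gallery

Uninstall-Module dbatools -RequiredVersion 1.0.20
Install-Module dbatools -RequiredVersion 1.0.15
```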

Tip: This method only reaches as far back as the author maintains versions in the PS Gallery. If they chose to unlist the particular version you need, this won’t work, and you’re on to scenario C below.

Scenario C – GitHub Repository

If the prior version you need is no longer available on the PowerShell Gallery, the next place to look is on the “Project site” or GitHub repository. In some cases, this isn’t possible, but thankfully, it’s more often available than not.

Go to the GitHub site, open the repository, confirm the version, and the branch, and click the Clone or Download button, then click Download Zip. Extract the ZIP file contents somewhere.

Keep in mind that the folder structure provided by the GitHub ZIP download is not the same as what PowerShell modules require in the default path environment. Use the following command to display the current module path…

(Get-Module <name> -ListAvailable).Path

Note the version number in the path string. You will need to “spoof” this to match the version you downloaded so the PowerShell environment will properly recognize it. For this example, just pretend it shows “…\1.0.20\…” and “…\1.0.15\…” doesn’t exist.

Navigate to the parent folder (e.g. the module name itself, “dbatools”), such as “c:\Program Files\WindowsPowerShell\Modules\dbatools”

Create a new sub-folder for the version you want (i.e. “1.0.15”)

Open the ZIP file, drill-down under the first root-level folder, to see the main files and folders. Extract the contents from there into that new module path folder on your hard drive.

IMPORTANT: This extract/copy process will copy more files than are strictly needed, but that’s okay. PowerShell will only load what it needs and ignore the rest.
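The manual staging above can be scripted, roughly like this. Assumptions: the ZIP filename and download location are examples, and the version/paths match the dbatools example; adjust all of them for your module.

```powershell
# Sketch: stage a GitHub ZIP download as a properly versioned module folder.
$modRoot = 'C:\Program Files\WindowsPowerShell\Modules\dbatools'
$version = '1.0.15'
$zip     = "$env:USERPROFILE\Downloads\dbatools.zip"   # example path

# Create the versioned sub-folder PowerShell expects
New-Item -Path (Join-Path $modRoot $version) -ItemType Directory -Force | Out-Null

# GitHub wraps everything in a "<repo>-<branch>" root folder, so expand to
# a temp location first, then copy that folder's *contents* into place
Expand-Archive -Path $zip -DestinationPath $env:TEMP -Force
$inner = Get-ChildItem -Path $env:TEMP -Directory -Filter 'dbatools-*' |
  Select-Object -First 1
Copy-Item -Path (Join-Path $inner.FullName '*') `
  -Destination (Join-Path $modRoot $version) -Recurse -Force

# Verify PowerShell now sees the rolled-back version
Get-Module dbatools -ListAvailable | Select-Object Name, Version
```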

If there is no GitHub (or other) repository available, or the version is no longer available for some reason, you’re on to scenario D below.

Scenario D – F**k it

That’s right, just F**K it. Yell out obscenities, and claim you have Tourette syndrome. After you calm down, search for alternative sources:

  • Other systems which still have the older module version installed (copy the folders/files)
  • System or file backups which you could pilfer to get the older module files back. Use the $env:PSModulePath variable to guide you towards the folder and file location(s).
  • Call a friend who might have an older version installed somewhere, and threaten them with fresh doughnuts or cold beer, until they give in.

If that doesn’t work, go to a gym and beat up a punching bag for an hour.

Meanwhile

As it turned out, they’d built a PowerShell-based automation process using internal scripts, and modules available on the PowerShell Gallery. Nothing unusual about that; it is what it was intended for. However, they had also built-in an automatic “update all modules” task at the beginning of their script.

This is a major no-no, because it violates basic “change control” rules. Every change (emphasis on “every“) should (read: must) be tested prior to applying in a production environment. Making the update process part of the production workflow automatically breaks that rule. And in their case, the module they were using was updated to deprecate a parameter on a particular function, which crashed their particular process.

Be careful not to confuse what I’m saying with automated CI/CD pipelines (dev > test > prod). This is merging external changes into a production environment; skipping dev and test entirely. In a nutshell, if you follow standard change control practices, you should rarely, if ever, encounter this situation.
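The fix for the scenario above is to pin the version your process was actually tested with, instead of auto-updating at runtime. A minimal sketch (the module and version are examples; pin whatever you tested):

```powershell
# Sketch: pin a tested module version in a production script,
# rather than running an "update all modules" task at the top.
#requires -Modules @{ ModuleName = 'dbatools'; RequiredVersion = '1.0.15' }

Import-Module dbatools -RequiredVersion 1.0.15

# ...rest of the production workflow...
```

Updating the pinned version then becomes a deliberate, tested change instead of a surprise at 2 AM.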

Long story short (like I’m any good at short stories), they couldn’t locate a local copy of the older version and didn’t have a suitable backup to search, but the older version of the module was available in PS Gallery, so they went with scenario B.

Then the angry pack of wolves climbed in through the bedroom window in the middle of the night and ate every single one of them. Oh wait, wrong story…

And they lived happily ever after. The end.

Cloud, Scripting, Technology

Building Blocks: GitHub Issues via PowerShell

The PowerShell module “PowerShellForGitHub” contains a powerful collection of functions to let you interact with, and manage, your GitHub goodies. (Note: read the Configuration section carefully before using). I won’t repeat the installation and configuration part since they already took care of that just fine.

After playing around with it, I found one useful way to leverage this is to query the open issues for my repos, and feed selected information to other things like e-mail, Teams, and so forth. Since it’s just providing a pipeline of information, you can send it off anywhere your mind can imagine.

#requires -Modules PowerShellForGitHub
function Get-GitHubRepoIssues {
  [CmdletBinding()]
  param (
    [parameter(Mandatory=$True, HelpMessage="The name of your repository")]
    [ValidateNotNullOrEmpty()]
    [string] $RepoName,
    [parameter(Mandatory=$False, HelpMessage="GitHub site base URL")]
    [ValidateNotNullOrEmpty()]
    [string] $BaseUrl = "https://github.com/skatterbrainz"
  )
  try {
    # get open issues for the repo, oldest ID first
    $issues = Get-GitHubIssue -Uri "$BaseUrl/$RepoName" -NoStatus |
      Where-Object { $_.state -eq 'open' } |
        Sort-Object Id |
          Select-Object Id, Title, State, Labels, Milestone, html_url
    $issues | ForEach-Object {
      # flatten the label names into one delimited string
      $labels = $null
      if (![string]::IsNullOrEmpty($_.Labels.name)) {
        $labels = $_.Labels.name -join ';'
      }
      [pscustomobject]@{
        ID        = $_.Id
        Title     = $_.Title
        State     = $_.State
        Labels    = $labels
        Milestone = $_.Milestone.title
        URL       = $_.html_url
      }
    }
  }
  catch {
    Write-Error $_.Exception.Message
  }
}
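Calling it is straightforward. The repo name below is just an example, and the filtering/formatting is one of many possibilities:

```powershell
# Example usage (repo name is hypothetical)
Get-GitHubRepoIssues -RepoName 'ud-cmwt' |
  Format-Table ID, Title, Labels, Milestone -AutoSize

# or feed a filtered subset into email, a Teams webhook, and so on
Get-GitHubRepoIssues -RepoName 'ud-cmwt' |
  Where-Object { $_.Labels -match 'bug' } |
  Select-Object Title, URL
```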

Sample output…

So, if you have a GitHub account with active repositories and issues, you might be able to glue some cool things together using PowerShell. If you have a cool example, share it in the comments below and I’ll be happy to share it on Twitter as well.

Cheers!

System Center, Technology

Support Requests – 2017 Flashback

I was just talking with someone about how “times have changed” just since 2017. Then I found an old email which had a list of cases I was working on around Q1-17 (former employer). Compared to then, 2019 has been much more calm.

  1. ConfigMgr client push account not having permissions on the remote devices.
  2. Over-zealous Antivirus settings getting in the way (McAfee) of ConfigMgr client installations.
  3. Network admins added/changed subnets without telling SCCM admins (site boundary updates)
  4. Using separate accounts in trusted AD forests, rather than a central trusted account, and the passwords were out of sync.
  5. Another team installed a 3rd party help desk product on the SQL host as SCCM uses, and didn’t tell them it hogs most of the available memory and violates the terms of the ConfigMgr/SQL license.
  6. After suggesting the use of an isolated IP subnet and dedicated DP for a central imaging workbench, the server admin team instead added a 2nd NIC to both the SCCM primary site server and a different DP, on different subnets, one without a gateway, and didn’t tell the SCCM admins or anyone else.
  7. IT staff enrolled several Surface Books with EMS/Intune, then removed the Intune client, installed the SCCM client, and then opened a support ticket about why the client no longer shows as “managed” in Intune.  Microsoft investigated, explained and closed the request (as they should have).  The customer argued to keep the request open.  I was brought in to help explain why it should be closed (that alone took 2 days).
  8. Primary site server has CrowdStrike, Symantec EP, and Malware Bytes agents installed, all are active, and none have ConfigMgr exclusions. Long day.
  9. Client Push installation has custom settings which set the default MP to one that was removed from the environment years ago.
  10. Network team re-assigned subnets during an office relocation.  No one was notified to update AD sites and subnets or ConfigMgr (site boundaries).
  11. DNS scavenging was turned off, with DHCP lease duration of 3 days and 50% of devices roam around the campus every day or two.

databases, Scripting, System Center, Technology

What Not to Do With ConfigMgr, 1.0.1

[note: this post has been sitting in my drafts folder for over a year, but recent events reminded me to dust it off and post it]

One of my colleagues, the infamous @chadstech, sent a link to our team, to the slide deck from the Channel9 session (MS04) “30 Things you should never do with System Center Configuration Manager” by @orinthomas and @maccaoz. If you haven’t seen (or read) it already, I strongly recommend doing so first.

It’s from 2016, so even though it’s a few years old now, it still holds up very well in mid 2019. However, everyone who’s ever worked with that product knows that the list could become a Netflix series.

This blog post is not going to repeat the above; instead, it appends that list with some things I still see in a variety of environments today. Things which really should be nipped in the bud, so to speak. Baby steps.

Using a Site Server like a Desktop

Don’t do it. Install the console on your crappy little desktop or laptop and use that. Leave your poor server alone. Avoid logging into servers (in general) unless you REALLY need to perform local tasks, and that’s it. Anything you CAN do remotely, should be done remotely.

If installing/maintaining the ConfigMgr console is your concern: forget that. The days of having to build and deploy console packages are gone. Install it once, and let it update itself when new versions are available. Kind of like Notepad++. Nice and easy.

Why? Because…

  • Using a server as a daily desktop workspace drags on resources and performance.
  • It creates a greater security and stability risk to the environment.
  • The more casual you are with your servers, the sloppier you’ll get, and eventually you’ll do something you’ll regret.

Whatever your excuse has been thus far, stop it.

Anti-Virus Over-Protection

Even in 2019, with so many tools floating about like Symantec, McAfee, Sophos, CrowdStrike, and so on, when I ask if the “exclusions” are configured to support Configuration Manager, I often get a confused look or an embarrassing chuckle. Gah!!! Chalkboard scratch!

There are several lists of things to exclude from “real-time” or “on-demand” scanning, like this one, and this one. Pick one. Failing to do this VERY often leads to breaks in processes like application deployments, software updates deployments, and policy updates.

Also important: with each new release of Configuration Manager, read the release notes and look for new folders, log files, services or processes that may be introduced. Be sure to adjust your exclusions to suit.
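If you’re using Microsoft Defender, the exclusions can even be scripted. A sketch, with a big caveat: the paths and process below are common examples only, not the definitive list; build your real exclusions from one of the published references mentioned above and revisit them after each ConfigMgr upgrade.

```powershell
# Sketch: add example ConfigMgr exclusions to Microsoft Defender.
# These entries are illustrative; verify against a published exclusion list.
Add-MpPreference -ExclusionPath "$env:WINDIR\CCM"
Add-MpPreference -ExclusionPath "$env:WINDIR\ccmcache"
Add-MpPreference -ExclusionProcess 'CcmExec.exe'

# Review what's currently excluded
Get-MpPreference | Select-Object ExclusionPath, ExclusionProcess
```

Third-party AV products (Symantec, McAfee, Sophos, etc.) have their own consoles and policy mechanisms for the same thing.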

Ignoring Group Policy Conflicts

Whatever you’re doing with regards to GPO settings, make damned sure you’re not also doing the same things with Configuration Manager. The two “can” be combined (in rare cases) to address a configuration control requirement, and you can sew two heads on a cow, but that doesn’t mean it’s the best approach.

Pick one, or the other, only. If you have WSUS settings deployed by GPO, and are getting ready to roll out Software Updates Management via Configuration Manager, stop and carefully review what the GPO’s are doing and make adjustments to remove any possible conflicts.

And, for the sake of caffeine: DOCUMENT your settings wherever they live. GPO’s, CI’s or CB’s in ConfigMgr, scheduled tasks, whatever. DOCUMENT THEM! Use the “Comments” or “Description” fields to your advantage. They can be mined and analyzed easily (take a look at PowerShell module GPODOC for example / shameless plug).

One-Size-Fits-All Deployments

I’ve seen places that only use packages, or only use Task Sequences, or only use script wrapping, or only repackage with AdminStudio (or some alternative). That’s like doing every repair job in your house or apartment with a crowbar.

There’s nothing wrong with ANY means of deploying software as long as it’s the most efficient and reliable option for the situation. Just don’t knee-jerk into using one hammer for every nail, screw, and bolt you come across.

Pick the right tool or method for each situation/application. Doing everything “only” one way is ridiculously inefficient and time-wasting.

Sharing SQL Instances

The SQL licensing that comes with a System Center license does not permit hosting third-party products. Not even your own in-house projects, technically speaking. You “can” do it, but you’re not supposed to.

What that means is, when you run into a problem with the SQL Server side of things, and you call Microsoft, and they look at it and see you have added a bunch of unsupported things to it, you’ll likely get the polite scripted response, “Thank you for being a customer. You appear to be running in an unsupported configuration. Unfortunately, we can’t provide assistance unless you are running in a supported configuration. Please address this first and re-open your case, if needed, for us to help? Thank you. Have a nice day. Bye bye now.”

And, now, you’re facing an extended duration of what could have been a simple problem (or no problem at all, since your third-party app might be the problem).

Configuration Manager is extremely demanding of its SQL resources. Careful tuning and maintenance is VERY VERY VERY often the difference between a smooth-running site, and an absolute piece of shit site. I can’t stress that enough.

Leeching SQL Resources

Some 3rd party products, which I’m advised not to name for various legal reasons, provide “connection” services into the Configuration Manager database (or SMS Provider). Attaching anything to a system incurs a performance cost.

Before you consider installing a “trial” copy of one of those in your production environment, do it in a test environment first. Benchmark your environment before installing it, then again after. Pay particularly close attention to what controls that product provides over connection tuning (polling frequency, types of batch operations, etc.).

And, for God’s sake (if you’re an atheist, just replace that with whatever cheeseburger or vegan deity you prefer), if you did install some connected product, do some diagnostic checking to see what it’s really doing under the hood.

And just as important: if you let go of the trial (or didn’t renew a purchased license) – UNINSTALL that product and make sure its sticky little tentacles are also removed.

Ignoring Backups

Make sure backups are configured and working properly. If you haven’t done a site restore/recovery before, or it’s been a while, try it out in an isolated test environment. Make sure you understand how it works, and how it behaves (duration, results, options, etc.)

Ignoring the Logs

Every single time I get a question from a customer or colleague about some “problem” or “issue” with anything ConfigMgr (or Windows/Office) related, I usually ask “what do the logs show?” I’d say, on average, that around 80% of the time, I get silence or “hold on, I’ll check”.

If you ask me for help with any Microsoft product or technology, the first thing I will do is ask questions. The second thing I will do is look at the appropriate logs (or the Windows Event Logs).

So, when the log says “unable to connect to <insert URL here>” and I read that, and try to connect to same URL and can’t, I will say “Looks like the site isn’t responding. Here’s my invoice for $40,000 and an Amazon gift card”. And then you say “but I could’ve done that for free?!” I will just smile, and hold out my greedy little hand.

Keep in mind that the server and client logs may change with new releases. New features often add new log files to look at.

Check the logs first.

Ignoring AD: Cleanups

Managers: “How accurate is Configuration Manager?”

Answer: “How clean is your environment?”

Managers: (confused look)

If you don’t have a process in place to ensure your environment is cleaned of invalid objects and data, any system that depends on that data will also be inaccurate. It’s just a basic law of nature.

Step 1 – Clean up Active Directory. Remove accounts for users and computers that no longer exist. Move unconfirmed accounts to a designated OU until verified or removed. This process is EASY to automate, by the way.

Step 2 – Adjust ConfigMgr discovery method settings to suit your environment. Don’t poll for changes every hour if things really only change monthly. And don’t poll once a month if things really change weekly. You get the idea. Just don’t be stupid. Drink more coffee and think it through.

Step 3 – I don’t have a step 3, but the fact that you actually read to this point brings a tear to my eyes. Thank you!
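For step 1, the “easy to automate” part might look something like this. Assumptions: the inactivity threshold and quarantine OU path are examples only, and the ActiveDirectory module (RSAT) is available.

```powershell
# Sketch: find computer accounts inactive for 90+ days and move them to a
# quarantine OU for review. Threshold and OU are illustrative placeholders.
Import-Module ActiveDirectory

$staleDays    = 90
$quarantineOU = 'OU=Quarantine,DC=contoso,DC=local'

Search-ADAccount -AccountInactive -TimeSpan ([timespan]::FromDays($staleDays)) -ComputersOnly |
  ForEach-Object {
    Move-ADObject -Identity $_.DistinguishedName -TargetPath $quarantineOU -WhatIf
  }
# Review the -WhatIf output first, then remove -WhatIf to actually move them.
```

Wire that into a scheduled task and your discovery data stays a whole lot cleaner.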

Ignoring AD: Structural Changes

But wait – there’s more! Don’t forget to pay attention to these sneaky little turds:

  • Additions and changes to subnets, but forgetting to update Sites and Services
  • Changes to domain controllers, but not updating DNS, Sites and Services or DHCP
  • Changes to OUs, but forgetting to update GPO links
  • All the above + forgetting to adjust ConfigMgr discovery methods to suit.

Ignoring DNS and DHCP

“It’s never DNS!” is really not that funny, because it’s very often DNS. Or the refusal to admit there might be a problem with DNS. For whatever reason, many admins treat DNS like it’s their child. If you suggest there might be something wrong with it, it’s like a teacher suggesting their child might be a brat, or stupid, or worse: a politician. The other source of weirdness is DHCP and its interaction with DNS.

Take some time to review your environment and see if you should make adjustments to DHCP lease durations, DNS scavenging, and so on. Sometimes a little tweak here and there (with CAREFUL planning) can clean things up and remove a lot of client issues as well.

Check DHCP lease settings and DNS scavenging to make sure they are closely aligned to how often clients move around the environment (physically). This is especially relevant with multi-building campus environments with wi-fi and roaming devices.

Task Sequence Repetition

A few releases back, Microsoft added child Task Sequence features to ConfigMgr. If you’re unaware of this, read on.

Basically, you can insert steps which call other Task Sequences. In Orchestrator or Azure Automation parlance this is very much like Runbooks calling other Runbooks. Why is this important? Because it allows you to refactor your task sequences to make things simpler and easier to manage.

How so?

Let’s say you have a dozen Task Sequences, and many (or all) of them contain identical steps, like bundles of applications, configuration tasks, or driver installations. And each time something needs updating, like a new application version, or a new device driver, you have to edit each Task Sequence where you “recall” it being used. Eventually, you’ll miss one.

That’s how 737 Max planes fall out of the sky.

At the very least, it’s time wasted which could be better spent on other things, like drinking, gambling and shooting guns at things.

Create a new Task Sequence for each redundant step (or group of steps) used in other Task Sequences. Then replace those chunks of goo with a link to the new “child” Task Sequence. Now you can easily update things in one place and be done with it. Easy. Efficient.

Ignoring Staffing

Last, but certainly not least is staffing. Typically, this refers to not having enough of it. In a few cases, it’s too many. If your organization expects you to cover Configuration Manager, and its SQL Server aspects, along with clients, deployments, imaging, updates, and configuration policies, AND maintain other systems or processes, it’s time for some discussion, or a new job.

If you are an IT manager, and allow your organization to end up with one person being critical to a critical business operation, that’s foolish. You are one drunk driver away from a massive problem.

An over-burdened employee won’t have time to create or maintain accurate documentation, so forget the crazy idea of finding a quick replacement and zero downtime.

In team situations, it’s important to encourage everyone to do their own learning, rather than depend on the lead “guru” all the time. This is another single point of failure situation you can avoid.

If there’s anyone who knows every single feature, process and quirk within Configuration Manager, I haven’t met them yet. I’ve been on calls with PFE’s and senior support folks and heard them say “Oh, I didn’t know that” at times. It doesn’t make sense to expect all of your knowledge to flow out of one person. Twitter, blogs, user groups, books, video tutorials, and more can help you gain a huge amount of awareness of features and best practices.

That’s all for now. Happy configuring! 🙂

System Center, Technology

7 SCCM Task Sequence Tips

I purposely left out “OSD” in the title, because I see a significant increase in non-OSD tasks being performed with Task Sequences. This includes application deployments, complex configuration sequences, and so on. Whether those could be done more efficiently/effectively using other tools is a topic for another beer-infused, knife-slinging, baseball bat-swinging discussion. Just let me know early-on, so I can sneak out the back door.

Anyhow, this is just a short list of “tips” I find to be useful when it comes to planning, designing, building, testing, deploying and maintaining Task Sequences in a production environment. Why 7? Because it’s supposed to be lucky.

Disclaimer

Are you sitting down? Good. This might be a big shock to you, but I am *not* the world’s foremost expert on Task Sequences, or Configuration Manager. And some (maybe all) of these “tips” may be eye-rolling old news to you. But hopefully, some of this will be helpful to you.

Start Simple!

So often, I see someone jump in and start piling everything into a new Task Sequence at once, and THEN trying it out. This can make the troubleshooting process much more painful and time-consuming than it needs to be. Start with what developers call a “scaffold”, and gradually build on that.

I usually start with the primary task at hand, such as “install Windows 10 bare metal”. Test that with only the absolute bare minimum steps required to get a successful deployment. Then add the next-most-important steps in layers and continue on.

However you decide to start, just be sure to test each change before adding the next. It might feel tedious and time-wasting, but it can save you 10 times the hassle later on.

Divide and Conquer

Don’t forget that the latest few builds of ConfigMgr (and MDT btw) support “child”, or nested, Task Sequences. In situations where you have multiple Task Sequences which share common steps, or groups of steps, consider pulling those out to a dedicated Task Sequence and link it where needed. Much MUCH easier to maintain when changes are needed.

Some common examples where this has been effective (there are many more I assure you) include Application Installations, Drivers, Conditional blocks of steps (group has a condition, which controls sub-level steps within it, etc.), and setup steps (detection steps with task sequence variable assignments at the very top of the sequence, etc.)

I’m also surprised how many people are not aware that you can open two Task Sequence editors at the same time, side-by-side, and copy/paste between them. No need to re-create things, when you can simply copy them.

Organize and Label

If you are going to have multiple phases for build/test/deploy for your Task Sequences, it may help to do one (or both) of the following:

  • Use console folders to organize them by phase (e.g. Dev, Test, Prod, and so on)
  • Use a consistent naming convention which clearly identifies the state of the Task Sequence (e.g. “… – Prod – 1.2”)

This is especially helpful with team environments where communications aren’t always optimal (multiple locations, language barriers, time zones, etc.)

Establish a policy and communicate it to everyone, then let the process manage itself. For example: “All you drunken idiots, listen up! From now on, only use Task Sequences with ‘Prod’ in the name, unless you know it’s for internal testing only! Any exceptions to this require eating a can of bug spray.”
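If you want to spot-check how well the naming policy is holding up, a quick PowerShell one-liner can flag the stragglers. This is a rough sketch, not gospel: it assumes the ConfigMgr console’s PowerShell module is loaded, you’re connected to the site drive (e.g. PS1:\), and the “… – Phase – version” convention shown above.

```powershell
# Sketch: list task sequences that do NOT follow the
# "<name> - <Dev|Test|Prod> - <major>.<minor>" naming convention.
# Assumes the ConfigMgr PowerShell module is loaded and you're in the site drive.
Get-CMTaskSequence |
    Where-Object { $_.Name -notmatch ' - (Dev|Test|Prod) - \d+\.\d+$' } |
    Select-Object Name, PackageID, LastRefreshTime
```

Run it on a schedule (or before release meetings) and the policy mostly polices itself.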

Documentation

Wherever you can use a comment, description, or note field in anything, you should. This applies to more than ConfigMgr as well. Group Policy Objects and GP settings are notorious for lacking any explanation of why a setting exists or who created it. Don’t let this mine field creep into your ConfigMgr environment too.

Shameless plug: For help with identifying GPOs and settings (including preferences) which do or don’t have comments, take a look at the GpoDoc PowerShell module, available in the PowerShell Gallery, and wherever crackheads can be found.

The examples below show some common places that seem to be left blank in many (most) organizations I run across.

Other places where documentation (comments) can be helpful are the “Scripts” items, especially the Approval comment box.

Side note: You can query the SQL database view vSMS_Scripts, and check the “Comment” column values to determine what approval comments have been added to each item (or not). Then use the “Approver” column values to identify who to terminate.
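As a sketch of that side note, a query along these lines will surface scripts that were approved without a comment (and who approved them). Column names here are based on the vSMS_Scripts view as I’ve seen it; verify them against your own site database before pointing fingers.

```sql
-- Sketch: find approved scripts with no approval comment.
-- Column names assumed from the vSMS_Scripts view; confirm in your site DB.
SELECT ScriptName, Author, Approver, Comment
FROM dbo.vSMS_Scripts
WHERE Comment IS NULL OR Comment = ''
```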

Access Control

This is aimed at larger ConfigMgr teams. I’ve seen environments with a dozen “admins” working in the console, all with Full Administrator rights. If you can’t rein that wild-west show in a bit, at least sit down and agree who will maintain Task Sequences. Everyone else should stay out of them!

This is especially important if the team is not co-located. One customer I know was going through a merger (M&A) and, apparently, one group in another country didn’t like some of the steps in their Windows 10 task sequence, so they deleted the steps. No notifications were sent. It was discovered when the first group started hearing about things missing from newly-imaged devices.

In that case, the things needed were (A) better communications between the two groups, and (B) proper security controls. After a few meetings it was agreed that the steps in question would get some condition tests to control where and when they were enabled.

Make Backups!!!!

Holy cow, do I see a lot of environments where the Backup site maintenance task isn’t enabled. That’s like walking into a biker bar wearing a “Bikers are all sissies!” t-shirt. You’re just asking for trouble.

Besides a (highly recommended) site backup, however, it often pays dividends to make what I call “tactical backups”. This includes such SUPER-BASIC things as:

  • Make a copy of your production task sequences (in the console) – This is often crucial for reverting a bunch of changes that somehow jack up your task sequence, where you could otherwise spend hours/days figuring out which change caused it. Having a copy makes it really easy (and fast) to recover and avoid lengthy impact to production
  • Export your production task sequences – Whether this is part of a change management process (vaulting, etc.) or just as a CYA step, it can also make it easy to recover a broken Task Sequence quickly.
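The export side of those “tactical backups” can be scripted. The sketch below is one way to do it, not a definitive process: it assumes the ConfigMgr PowerShell module is loaded, a hypothetical share for the ZIP files, and the “– Prod –” naming convention from earlier.

```powershell
# Sketch: export production task sequences to dated ZIP files
# as a "tactical backup" before editing them.
# Assumes the ConfigMgr module is loaded; $backupRoot is a hypothetical share.
$backupRoot = '\\fileserver\TSBackups'
$stamp = Get-Date -Format 'yyyyMMdd'

Get-CMTaskSequence | Where-Object { $_.Name -like '* - Prod - *' } | ForEach-Object {
    $file = Join-Path $backupRoot ("{0}_{1}.zip" -f $_.PackageID, $stamp)
    Export-CMTaskSequence -TaskSequencePackageId $_.PackageID -ExportFilePath $file
}
```

Run it before any production edit and the “revert” story gets a whole lot shorter.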

Either of these is usually much less painful than pulling from a site backup.

As a double-added precaution, I highly/strongly recommend that anytime you intend to make a change to a production task sequence, you make a copy of it first. Then if your edits don’t work, instead of spending hours troubleshooting why a revert attempt isn’t actually reverting, you can *really* revert back to a working version.

Don’t Overdo It

One final piece of advice is this: Just because you get comfortable using a particular hammer, don’t let this fool you into thinking everything is a nail. Task Sequences are great, and often incredibly useful, but they’re not always the optimal solution to every challenge. Sometimes it’s best to stick with a very basic approach, like a Package, Application, or even a Script.

I’ve worked with customers who prefer to do *everything* via a Task Sequence. Even when it was obvious that it wasn’t necessary. The reason given was that it was what they were most familiar with at the time. They have since relaxed that default a bit, and saved themselves quite a bit of time. That said, Task Sequences are nice and should always be on your short list of options to solve a deployment need.

Summary

I hope this was helpful. If not, you can also print this out, and use it as a toilet bombing target. Just be sure to load up on a good Mexican lunch before you do. Cheers!