Devices, Projects, Scripting, System Center, Technology, windows

More Paint Fumes with Automated Device Naming using ConfigMgrWebService and PowerShell

Update-3: Added example for using the GetCMFirstAvailableNameSequence() function.

Update-2: fixed spelling of Nickolaj. Changed coffee brand. Moved lawn rake.

Update-1 (11/6): Corrected a few bone-headed mistakes. (1) The ConfigMgrWebService author is Nickolaj Andersen (sorry about that), but I still need to let Maurice finish his dinner. (2) There is a function provided for querying the next available incremental name in a numbered sequence, from CM not AD: GetCMFirstAvailableNameSequence. So at least some of this post still holds up. Geez, this coffee just isn’t strong enough anymore.

As if there aren’t enough types of paint to inhale already, much like the wide variety of methods for naming devices, here’s one that was interesting and fun to automate. This is for environments where devices need to be named in an incremental numerical fashion.


  • Devices use a standard prefix based on form factor (e.g. Chassis Type), such as “WS” and “LT”.
  • Unique portion is appended as an incremental value (e.g. 001, 002, …)
  • Examples: “WS001”, “LT003”

The Challenge


  • How to determine the next available name in the Active Directory domain, during an OSD task sequence, provided only with the prefix portion such as “WS”.
  • Solution should “fill-in” missing gaps in names found in AD, such as 001, 002, 004 (should return 003 as next available).

The Approach


  • Obtain device characteristics from WMI using PowerShell
  • Obtain the next available account in AD by querying the ConfigMgrWebService using PowerShell.

I’ve seen quite a few ways to address this, from importing modules and using ADSI queries, etc., all of which involve PowerShell (and why not?!).

I’ve spent the better part of the last year helping customers move to “Modern Driver Management” using SCConfigMgr’s solution set (click here for more details), so I remembered that ConfigMgrWebService also provides more query functions besides GetCMDriverPackageByModel.

The GetADComputer function looks closest to what I wanted, but it only returns one result per query (ask for a computer, get back one computer). No option that I could see for getting a list, much less a filtered list, of computer names in AD. I could have just bothered Maurice for like the 42nd time, but the poor guy needs to enjoy dinner in peace too. And there’s a very easy way to leverage that function as-is.

The example (below) takes the web service URI, the web service “secret key”, and naming pattern parameters, and loops through AD starting with a base index name of 1 (or 001). I set an arbitrary limit of 100 instances per prefix, but this can all be modified to whatever is needed.

param (
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $URI,
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $SecretKey,
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $Prefix,
  [parameter()][ValidateRange(3,15)][int] $NameLength = 4
)
try {
  Write-Verbose "connecting to web service at $URI"
  $ws = New-WebServiceProxy -Uri $URI -ErrorAction 'Stop'
  for ($index = 1; $index -lt 100; $index++) {
    $nextname = $Prefix + $([string]$index).PadLeft($NameLength - $($Prefix.Length), "0")
    Write-Verbose "checking name: $nextname"
    $found = ($ws.GetADComputer($SecretKey, $nextname)).SamAccountName
    if (![string]::IsNullOrEmpty($found)) {
      Write-Verbose "name exists: $nextname"
    }
    else {
      return $nextname
    }
  }
  Write-Output "no names for this prefix available from 1 to 100"
}
catch {
  Write-Error $_.Exception.Message
}

The -NameLength parameter controls the overall length of the names to check and return. Again, I set an arbitrary range of 3 to 15 characters (example: “WS1” to “WS0000000000001”). Here are the results when I run this against my AD domain, which already has devices WS01, WS02, WS03, WS09 and WS10…

It’s pretty quick. In my tests it runs about 20 to 30 names per second.
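
To make the padding math concrete, here’s a small stand-alone illustration of what that PadLeft expression produces, using hypothetical values (prefix “WS”, -NameLength 4):

```
# illustration only: how the incremental suffix gets zero-padded
$Prefix     = 'WS'
$NameLength = 4

foreach ($index in @(5, 42)) {
    # pad the index to fill whatever room the prefix leaves over
    $Prefix + ([string]$index).PadLeft($NameLength - $Prefix.Length, "0")
}
# WS05
# WS42
```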

(updated) Using the built-in GetCMFirstAvailableNameSequence() function, it would look like the following…

try {
  Write-Verbose "connecting to web service at $URI"
  $ws = New-WebServiceProxy -Uri $URI -ErrorAction 'Stop'
  $nextSuffix = $ws.GetCMFirstAvailableNameSequence($SecretKey, $NameLength, $Prefix)
  $nextname = "$Prefix$nextSuffix"
  Write-Verbose "next available name: $nextname"
  return $nextname
}
catch {
  Write-Error $_.Exception.Message
}

To directly assign the output to a task sequence variable, like OSDComputerName, just wrap the code in a function block (i.e. Get-NextADDeviceName), and append a few more lines to assign the output…

$newname = Get-NextADDeviceName
try {
  $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
  $tsenv.Value("OSDComputerName") = $newname
}
catch {
  Write-Warning "not running in a task sequence at the moment"
}

Putting all this together it looks like the following, but I’m sure you will see it can be improved a great deal…

param (
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $URI,
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $SecretKey,
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $Prefix,
  [parameter()][ValidateRange(3,15)][int] $NameLength = 4
)
function Get-NextADDeviceName {
  try {
    Write-Verbose "connecting to web service at $URI"
    $ws = New-WebServiceProxy -Uri $URI -ErrorAction 'Stop'
    for ($index = 1; $index -lt 100; $index++) {
      $nextname = $Prefix + $([string]$index).PadLeft($NameLength - $($Prefix.Length), "0")
      Write-Verbose "checking name: $nextname"
      $found = ($ws.GetADComputer($SecretKey, $nextname)).SamAccountName
      if (![string]::IsNullOrEmpty($found)) {
        Write-Verbose "name exists: $nextname"
      }
      else {
        return $nextname
      }
    }
    Write-Output "no names for this prefix available from 1 to 100"
  }
  catch {
    Write-Error $_.Exception.Message
  }
}

# call the function and get the next available name
$newname = Get-NextADDeviceName
Write-Verbose $newname
try {
  $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
  $tsenv.Value("OSDComputerName") = $newname
}
catch {
  Write-Warning "not running in a task sequence at the moment"
}

From here, it’s simply a matter of adding it to the task sequence, using Run PowerShell Script. If you need an example for querying the chassis type, check out my earlier post on computer naming.


databases, Scripting, System Center, Technology

Let’s Take a PowerShell Dump with ConfigMgr!

Let’s pull some inventory summary data out of Configuration Manager, just like a TSA agent with Latex gloves on! We’ll be using the following goodies:

  • PowerShell
  • PowerShell Module: dbatools
  • PowerShell Module: importexcel
  • A JSON file
  • A Configuration Manager site with a SQL Server database
  • A user account which can access the database and successfully run queries, has access to install PowerShell modules, and has plenty of coffee and bad jokes.


This little project demonstrates one example of exporting information from a Configuration Manager SQL Server database, directly to Microsoft Excel, without even having Microsoft Excel installed. You will need Microsoft Excel eventually, in order to view the results, but that could be done years later, and by then, I’ll be too old to remember any of this.

For this demonstration I used the following environment:

  • Configuration Manager 1906 (or later)
  • SQL Server 2016 (or later)
  • Windows Server 2016 (or Windows 10, 1803 or later)
  • Windows PowerShell 5.1

Staring into the Abyss

It starts with a JSON file, which contains the parameters that include the dataset names, SQL queries, and specific output values. You could store this information in almost any desired format; I chose JSON because it’s fun to say out loud when you work next to a guy named Jason, and because my colleague Ryan nags the shit out of me to stop using XML. There. Are you happy now? 🙂

Here’s a snippet of the config.json file. If it doesn’t display correctly, I’m blaming Ryan, and WordPress. But mostly Ryan. I don’t know why.

{
    "Summary": {
        "query": "SELECT ( SELECT COUNT(*) FROM v_R_SYSTEM ) AS Devices,( SELECT COUNT(*) FROM v_R_USER ) AS Users",
        "properties": "Devices,Users"
    },
    "OperatingSystems": {
        "query": "select distinct Caption0 as OSName,BuildNumber0 as BuildNum, case when (BuildNumber0 = 18363) then '1909' when (BuildNumber0 = 18362) then '1903' when (BuildNumber0 = 17763) then '1809' when (BuildNumber0 = 17134) then '1803' when (BuildNumber0 = 16299) then '1709' when (BuildNumber0 = 15063) then '1703' when (BuildNumber0 = 14393) then '1607' when (BuildNumber0 = 10586) then '1511' when (BuildNumber0 = 10240) then '1507' else '' end as Build, CSDVersion0 as SvcPack, COUNT(*) as Devices from v_GS_OPERATING_SYSTEM group by Caption0,BuildNumber0,CSDVersion0 order by Caption0,BuildNumber0",
        "properties": "OSName,BuildNum,Build,SvcPack,Devices"
    },
    "Models": {
        "query": "select distinct Manufacturer0 as Manufacturer, Model0 as Model, Count(*) as Devices from dbo.v_GS_COMPUTER_SYSTEM group by Manufacturer0,Model0 order by Manufacturer0,Model0",
        "properties": "Manufacturer,Model,Devices"
    },
    "Disks": {
        "query": "select distinct ld.SystemName0 as Name, ld.Size0 as Capacity, ld.FreeSpace0 as FreeSpace, cs.Model0 as Model, case when (FreeSpace0 < 20000) then 'No' else 'Yes' end as Ready from v_GS_LOGICAL_DISK ld inner join v_GS_COMPUTER_SYSTEM cs on cs.ResourceID = ld.ResourceID where ld.DeviceID0 = 'C:' order by ld.SystemName0",
        "properties": "Name,Model,Capacity,FreeSpace,Ready"
    },
    "Memory": {
        "query": "select sys.Name0 as Name,pm.ResourceID,pm.Capacity0 as Memory from v_GS_PHYSICAL_MEMORY pm inner join v_R_SYSTEM sys on pm.ResourceID = sys.ResourceID group by sys.Name0,pm.ResourceID,pm.Capacity0 order by sys.Name0",
        "properties": "Name,ResourceID,Memory"
    },
    "Software": {
        "query": "select distinct ARPDisplayName0 as ProductName,ProductVersion0 as Version,Publisher0 as Publisher, ProductCode0 as ProductCode, COUNT(*) as Installs from v_GS_INSTALLED_SOFTWARE_CATEGORIZED where (LTRIM(ARPDisplayname0) <> '') and (SUBSTRING(ARPDisplayName0,0,2) <> '..') group by ARPDisplayName0,ProductVersion0,Publisher0,ProductCode0 order by ARPDisplayName0,ProductVersion0",
        "properties": "Installs,ProductName,Version,Publisher,ProductCode"
    },
    "ADSites": {
        "query": "select distinct sys.AD_Site_Name0 as ADSite, COUNT(*) as Devices from v_R_SYSTEM as sys group by AD_Site_Name0 order by AD_Site_Name0",
        "properties": "ADSite,Devices"
    },
    "Gateways": {
        "query": "select distinct DefaultIPGateway0 as Gateway, COUNT(*) as Devices from v_GS_NETWORK_ADAPTER_CONFIGURATION where DefaultIPGateway0 IS NOT NULL group by DefaultIPGateway0 order by Devices desc",
        "properties": "Gateway,Devices"
    },
    "DistPoints": {
        "query": "select Servername,SMSSiteCode as Site,Description,Type,IsPXE as PXE,IsDoincEnabled as DOINC, IsBITS as BITS,IsMulticast as MCast,IsPullDP as PullDP,IsPeerDP as PeerDP, SslState as SSL,PreStagingAllowed as Prestage from v_DistributionPoints order by ServerName",
        "properties": "Servername,Site,Description,Type,PXE,DOINC,BITS,MCast,PullDP,PeerDP,SSL,Prestage"
    }
}
The script (below) imports the JSON data as a hashtable-like structure (it’s actually a custom object type, but whatever, it’s JSON so it has to be good). Then we connect to the database instance, punch it in the face, and steal its data. By the time it wakes up, we’re in Vegas baby.

#requires -modules dbatools,importexcel
param (
  [parameter(Mandatory)][ValidateLength(3,3)][string] $SiteCode,
  [parameter()][ValidateNotNullOrEmpty()][string] $DbHost = "localhost",
  [parameter()][ValidateNotNullOrEmpty()][string] $DbName = "CM_$SiteCode",
  [parameter()][ValidateNotNullOrEmpty()][string] $ReportPath = "$env:USERPROFILE\documents",
  [parameter()][ValidateNotNullOrEmpty()][string] $ConfigFile = ".\config.json"
)
$ErrorActionPreference = 'Stop'
$XlFile = $(Join-Path $ReportPath "$SiteCode`_Inventory.xlsx")

if (Test-Path $XlFile) { Remove-Item -Path $XlFile -Force }

if (-not (Test-Path $ConfigFile)) {
  Write-Warning "configuration file not found: $ConfigFile"
  return
}
try {
  $cfg = Get-Content $ConfigFile | ConvertFrom-Json
  $keys = ($cfg.psobject.Properties).Name
  foreach ($key in $keys) {
    $qset  = $cfg."$key"
    $qtext = $qset.query
    $props = $qset.properties -split ','
    Invoke-DataExport -Query $qtext -ReportName $key -Properties $props
  }
  Write-Host "processing complete" -ForegroundColor Green
}
catch {
  Write-Error $_.Exception.Message
}

There’s also a nested function (in the same .ps1 file) which does the dirty work of disposing the bodies and running the query truck:

function Invoke-DataExport {
  param (
    [parameter(Position=0)][ValidateNotNullOrEmpty()][string] $Query,
    [parameter(Position=1)][ValidateNotNullOrEmpty()][string] $ReportName,
    [parameter()][string[]] $Properties
  )
  try {
    Write-Host "exporting: $ReportName" -ForegroundColor Cyan
    Invoke-DbaQuery -SqlInstance $DbHost -Database $DbName -Query $Query |
      Select-Object $Properties |
        Export-Excel -Path $XlFile -WorksheetName $ReportName
  }
  catch {
    Write-Error $_.Exception.Message
  }
}

Don’t worry, there’s a link to download this roadkill mess into a plastic bag for safe consumption, near the end of this rambling post.

To use this code, you’ll need to install two (2) PowerShell modules first:

Install-Module dbatools
Install-Module ImportExcel

Let’s walk through this bucket of puke that I might call “code”:

  • First, we check to see if the -ConfigFile parameter points to a real file object, or if the user was high on meth. Then we proceed.
  • Then we look to see if there’s an existing output file, and if there is, we nuke it with an Ebola Nuclear Laser bomb, and move on.
  • Next, we import the configuration data from the JSON file
  • Then, we fetch the main group names as $keys, which are kind of like getting the column names from a table, or something like that. Actually, not at all, but it sounded good at the time.
  • Then, we iterate (loop) through each key and fetch its associated sub-properties: “query” and “properties”. The “query” data is the actual SQL select statement. The “properties” are the names of the output columns which I want to export, and the order in which they should be exported (from left to right).
  • Then, submit the key name, query and properties to the Invoke-DataExport function.
  • The Invoke-DataExport function sends the query to the SQL instance, and passes the results (dataset) over the pipeline to the Select-Object step which filters only the properties it’s told, then sends it to Export-Excel to stuff into the spreadsheet file.
  • Finally, we pour some Write-Host sauce on it, and microwave for about 3 seconds and serve.

Sick of reading this amateurish nausea? Me too. If you just want to download the code and laugh at it, here are the links:

Remember, there are no guarantees or warranties, life is a mystery, actual results may vary, batteries are not included, and you are 100% liable for any and all bad things that happen anywhere in the universe while playing with this stuff.

If you like it, post a comment. If you don’t like it, post a comment. If you don’t like posting comments, post a comment.


Scripting, System Center, Technology, windows

Thank you, PowerShell!

These are just a few snippets of PowerShell code that helped me out during the past week. There were quite a few others, but these were the easiest examples to show.

UPDATE 1 – Swapped-out Invoke-WebRequest with System.Net.WebClient/DownloadFile in the download script example below, per suggestion of Guy Leech (thanks!)

Adding a User Login to SQL Server

This example adds a domain computer account to a SQL Server instance with “sysadmin” role membership. This example requires the dbatools PowerShell module.

$computer = 'db02'
$account = 'contoso\cm01$'
New-DbaLogin -SqlInstance $computer -Login $account
Get-DbaLogin -SqlInstance $computer -Login $account | Set-DbaLogin -AddRole sysadmin

Adding an Account to a remote Server Administrators Group with Service Logon Rights

This example adds a domain user/service account into the local Administrators group on a remote server, and grants it the “Log on as a service” and “Log on as a batch job” rights. This could have been accomplished via GPO, but for various reasons that was not an option. This example requires the carbon PowerShell module.

$computer = 'db02'
$account = 'contoso\cm-sql'

$s1 = New-PSSession -ComputerName $computer -Name 's1'
Enter-PSSession -Session $s1
Add-LocalGroupMember -Group 'Administrators' -Member $account
Grant-CPrivilege -Identity $account -Privilege 'SeBatchLogonRight','SeServiceLogonRight'

Creating a Shared Folder Tree

This is basically a template for establishing a “Definitive Source Library”, or centralized content store for things like applications, drivers, scripts, utilities, and so on.

It creates a root-level folder “Sources” on a selected logical drive, shares it as “Sources$” (a hidden share), and then creates more sub-folders below that. The folder structure can be defined in a data file, a GitHub Gist, an explicit variable, or just about anything you prefer. In this example, we used a .txt file.

Example: folders.txt


Example: script code… (be sure to change the $DriveLetter assignment to suit your needs)

param (
  [parameter()][ValidateLength(1,1)][string] $DriveLetter = "E",
  [parameter()][ValidateNotNullOrEmpty()][string]$InputFile = ".\folders.txt",
  [parameter()][ValidateNotNullOrEmpty()][string]$RootFolder = "SOURCES"
)
$ErrorActionPreference = 'Stop'
try {
    $rootPath = "$DriveLetter`:\$RootFolder"
    if (!(Test-Path $rootPath)) {
        mkdir $rootPath -Force
        Write-Verbose "created folder: $rootPath"
    }
    $folders = Get-Content $InputFile
    foreach ($folder in $folders) {
        $fpath = Join-Path -Path $rootPath -ChildPath $folder
        if (!(Test-Path $fpath)) {
            mkdir $fpath -Force
            Write-Verbose "created folder: $fpath"
        }
        else {
            Write-Verbose "folder exists: $fpath"
        }
    }
    $shareName = "$RootFolder`$"
    if ($shareName -notin (Get-SmbShare).Name) {
        Write-Verbose "creating share: $shareName"
        New-SmbShare -Path $rootPath -Name "$shareName"
    }
    Write-Host "finished processing"
}
catch {
    Write-Error $_.Exception.Message
}

Downloading Sample Applications

This example downloads installer files directly from vendor web or FTP sites into the appropriate (directed) local folders (which were created by the preceding example). This also uses a data file to specify the folder path and the associated remote URL path. It’s worth noting that I chose a tilde “~” character delimiter to distinguish from the embedded “=” characters in the URL strings, but you could make this a .csv file and wrap the values in double-quotes (e.g. “http://blahblahblahblah&locale=en-us”).

Example: downloads.txt


Bonus: additional downloads.txt entries to include SCConfigMgr’s Modern Driver Management tools, Ola Hallengren’s maintenance script, Bryan Dam’s WSUS ass-whoopin script, and positive vibes. 🙂


Example: script code

param (
    [parameter()][ValidateNotNullOrEmpty()][string]$InputFile = ".\downloads.txt",
    [parameter()][ValidateNotNullOrEmpty()][string]$rootPath = "H:\SOURCES"
)
try {
    $apps = Get-Content $InputFile
    foreach ($app in $apps) {
        $appdata   = $app -split '~'
        $appPath   = $appdata[0]
        $appSource = $appdata[1]
        if ($appdata.Count -eq 3) {
            $filename = $appdata[2]
        }
        else {
            $filename = $($appSource -split '/' | Select-Object -Last 1) -replace '%20','_'
        }
        $destPath = Join-Path -Path $rootPath -ChildPath $appPath
        if (!(Test-Path $destPath)) {
            mkdir $destPath -Force
            Write-Verbose "created folder: $destPath"
        }
        $destination = Join-Path -Path $destPath -ChildPath "$filename"
        if (!(Test-Path $destination)) {
            # Invoke-WebRequest -Uri $appSource -OutFile $destination
            # modified per suggestion from Guy Leech
            [void](New-Object System.Net.WebClient).DownloadFile($appSource, $destination)
        }
        else {
            Write-Verbose "file exists: $destination"
        }
    }
    Write-Host "finished processing"
}
catch {
    Write-Error $_.Exception.Message
}


business, Projects, Technology

Basic Automation Tips

There are many great articles on how to automate things in the IT world: user accounts, devices, software installations, configuration settings, security controls, monitoring, logs, and more. But when it comes to when you should automate, there’s not nearly as much noise being made.

Low-Hanging Fruit

Start with the smallest, but highly-repetitious tasks. Two that I see most often are clearing old log files, and copying files or folders from one place to another. If you’re doing this (or should be) on a recurring schedule, start with that. Knocking out the little stuff is the best approach for two (2) reasons:

  • Quick turn-around. Because they’re less-complicated, they’re often easier to automate.
  • Rapid time savings. Because they’re quicker to automate, you can knock out more of them in less time, getting back more free time to invest in automating the more complex tasks.

A third benefit is momentum. Once you get on a roll automating the less-complicated tasks, it will propel you on to automating more things.

Tracking and Accounting

After you’ve knocked out a few of the most basic/simple tasks, you should pause for a moment to assess what else “could be” automated, and compile a list. For each item on the list, make some notes:

  • Value to You
  • Value to the Organization
  • Time and Cost Savings
  • Expense
  • Complexity

Value to You refers to how important the task is to making YOUR job easier or less of a pain. Consider how often you need to redo things due to mistakes. Automating the task will often reduce or eliminate common mistakes.

Value to the Organization refers to how important this is to your employer. Would automating it provide any benefit to them? If so, list those benefits. Consider time savings as one of the benefits (e.g. not waiting on you to manually complete the task).

Time and Cost Savings is THE MOST IMPORTANT item to consider. Even if your employer doesn’t harp on you about spending money to save money, calculate the savings. Consider ((labor rate x time) + (cost of deferred tasks)), where “deferred tasks” are those which have to wait while you complete the first task. Calculate how much time will be recouped by having the task(s) automated.
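
For example, plugging some made-up numbers into that formula (a 2-hour monthly task, a $50/hour labor rate, and $25/month of deferred-task cost) works out like this:

```
# hypothetical numbers, just to demonstrate the formula
$rateHr   = 50    # labor rate per hour
$hours    = 2     # hours spent on the task each month
$deferred = 25    # monthly cost of tasks stuck waiting behind this one

$savingsPerMonth = ($rateHr * $hours) + $deferred
$savingsPerYear  = $savingsPerMonth * 12
"Savings: $savingsPerMonth/month, $savingsPerYear/year"
# Savings: 125/month, 1500/year
```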

Expense refers to any costs required to implement the automation. If you need to buy any products or subscriptions, note the cost. You’d be surprised how many tasks require zero expense to implement. It’s usually about time.

Complexity refers to a subjective rating of how difficult the task will be to automate. Don’t think of how difficult learning a scripting language will be. This is only about the task itself. Are there a lot of steps? A lot of if/else conditions to check for? Does it require an inconsistent schedule, or manual approvals along the way?

The following example is from one of my customers. Converting nerd-speak into MBA-speak won the management team over quickly, and his request to devote the time and expense were approved.

To help explain some of these columns: TS/M = time-savings per month, TS/Y = per year. CS/M = cost savings per month, CS/Y = per year. TS/A = time spent to automate (one-time), EX/A = expense incurred by one-time automation. My Value, Org Value, and Complexity are on a scale of 1 to 5. Rate/Hr = labor rate per hour for the person performing the task (only him at the moment).

Garbage In, Garbage Out – Part 1 : Trimming the Fat

An old friend of mine used to say “If you automate a broken process, you can only get an automated broken process“. It’s true. Automation won’t fix a bad process. It’s just a transcription from human to machine. A bad recipe will only produce a bad meal.

Before you start automating anything, break it down into each step to determine which steps are really necessary. Aim to combine steps, split steps, and most importantly: remove steps. Whatever it takes to make the process “correct”. Don’t worry about “easy”, since you’re (hopefully) not building it yet. Get the plan nailed down first, then worry about the execution.

An example I recall was when a customer was following the same routine every month to add computers to an AD security group, which was then updated during discovery by Configuration Manager to add them to a device collection.

As it turned out, the computer names alone were the only unique criteria (workstations with prefix “ACT”). So the AD security group management step was a complete waste of time. The collection query rule simply needed to be updated to filter on the names. A step was eliminated and the process is now more efficient.

However, the most-common task I see that can be eliminated from most environments is device naming (refer to my earlier blog post on this topic). In 99% of cases I’ve seen, the naming process existed simply out of comfort and habit, but provided absolutely no tangible (measurable) value or cost savings to anyone.

The placement of devices in AD OUs is a close second in this regard. Unless you need to isolate the devices for GPO targeting or some process that requires the OU location, it’s often unnecessary.

Data Ownership and Liability

If your automation process involves human input at ANY point in the sequence, make sure you place STRONG controls over data input. If you can eliminate human input, do it!

Always use the “authoritative source” for all of your data inputs. If you need employee information, import it directly from the HR system, don’t let someone re-enter it. If another department owns the data you need, have them feed it directly to your process. Don’t forget: It’s not your data. You’re just making use of it.

No matter who provides the data you will use, you still need to validate it. They might let sloppy work get past their QA step, but it might blow up on you later on. If you expect certain types of values, verify them in your code.
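
As a rough sketch of what that validation can look like (the function name and record fields here are made up for illustration), even a simple regex check before the data enters your process goes a long way:

```
# hypothetical example: validate incoming records before trusting them
function Test-EmployeeRecord {
    param ([psobject] $Record)
    # expect a 6-digit employee ID and a non-blank department code
    if ($Record.EmployeeID -notmatch '^\d{6}$') { return $false }
    if ([string]::IsNullOrWhiteSpace($Record.Department)) { return $false }
    return $true
}

Test-EmployeeRecord ([pscustomobject]@{ EmployeeID = '123456'; Department = 'HR' })  # True
Test-EmployeeRecord ([pscustomobject]@{ EmployeeID = '12AB56'; Department = 'HR' })  # False
```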


Synergy

Everyone hates that word, synergy. Sales people ruined it years ago. But it’s an important piece of any project trying to implement automation. Look for ways to show benefits to other groups/departments, not just your own.

You’d be amazed at how much good can happen by getting enthusiastic support from other groups in your organization. They will bring ideas you never expected, and many of them will crossover into processes you use. And, most importantly, the bigger the impact to the organization, the more support you’ll gain from upper management.

Happy Friday! 🙂

Devices, Projects, Scripting, System Center, Technology, windows

A Tiny Web Service for Tiny ConfigMgr Devices

This is just a “proof of concept” used for another project, but I wanted to post it here in case it’s of help to anyone else.

What is it? It’s a web service (REST endpoint) built using Universal Dashboard (Community edition) PowerShell module.

Why? Because it was freaking stupid-ass simple. Like me.


This example is running on the SMS Provider host (in my case, it’s my Primary site server). ALWAYS – ALWAYS – ALWAYS test your scripts using an isolated lab environment, before you introduce it to a production environment. This code example is provided “as-is”, and without any warranty/guarantee explicit or implied, yada-yada, yap yap, etc. batteries not included.

The Code

First, let’s install the module and take care of loading it as-needed.

try {
  if (!(Get-Module UniversalDashboard.Community)) {
    Install-Module UniversalDashboard.Community
  }
  Import-Module UniversalDashboard.Community
}
catch {
  Write-Error $_.Exception.Message
}

Next we’ll run a WMI query against the SMS Provider host (often/usually the Primary or CAS host), and return a dataset of all the Devices in the site.

$SiteCode = 'P01'
try {
    $devices = Get-WmiObject -Class "SMS_R_System" -Namespace "root\SMS\Site_$SiteCode" |
        Select-Object ResourceID,Name,Client,ClientVersion,ADSiteName,DistinguishedName,MACAddresses,IPAddresses,LastLogonTimestamp,OperatingSystemNameandVersion,Build
    $Cache:CMDevices = @( $devices )
    $Endpoints = @()

    $Endpoints += New-UDEndpoint -Url 'cmdevices' -Endpoint {
        $Cache:CMDevices | ConvertTo-Json
    }
    Start-UDRestApi -Endpoint $Endpoints -Port 10001 -AutoReload
}
catch {
    Write-Error $_.Exception.Message
}

If you save the above as “cmdevices.ps1” and run it (run as administrator), you should now have a web service named “cmdevices” running on TCP port 10001. To verify, enter the following:

Get-UDRestApi

It should return {Name, Port, Running, DashboardService} as an array.

Kick the Tires

Invoke-RestMethod http://localhost:10001/api/cmdevices

You should get an array of data with all of your devices. If you didn’t, I’m blaming you.

To terminate the web service, run the following:

Get-UDRestApi -Name cmdevices | Stop-UDRestApi

Notes and Comments

This is just a simple example of what you can do, but it comes with some caveats:

  • Running heavy web services (complex requests, frequent connections and queries, etc.) adds performance overhead to the server, and Configuration Manager.
  • The SMS Provider is a WMI endpoint, and is not as efficient for some data-shaping techniques as SQL / ADO requests can be. Keep an eye on the performance of the server.
  • This example is not optimized for performance or exception handling. It’s just a quick example of the possibilities.

business, Devices, Technology

The Great Big Tiny Book of Computer Naming Conventions

UPDATED 2019/10/03 – Fixed boneheaded pasting of wrong code snippet for chassis type lookup.

It’s now October 2019, and I’m just back inside my office, after enjoying almost 30 incredible minutes of a relaxing backyard bonfire, until my neighbors called the fire department on me. Bastards. I didn’t even use gasoline this time. I need a 40-acre ranch, or an RPG launcher. I’m pretty sure that previous sentence is why I’m not allowed to win any big lottery jackpots.

Anyhow, I was thinking of how many variations I’ve encountered from customers who insist on naming computers “their way”. Someday, I might actually bust out my Sinatra impression (which sucks) of “My Way”. This is focused primarily on workstations, rather than servers, which often use entirely different drug-induced conventions.

DISCLAIMER: This is in no way whatsoever an endorsement for any particular naming convention, nor any naming convention at all. Do whatever you want with your devices. I prefer gasoline and a torch. This is just an observation of some of the many ways to name computer devices that I’ve run across (and had to deal with).

Key Terms

  • AT = Asset Tag or Serial Number
  • FC = Form factor code (desktop, laptop, tablet, etc.)
  • LC = Location Code
  • UN = User name, UserID
  • AN = Abbreviated User Name
  • DC = Organization Group Code
  • OS = Operating System indicator (e.g. “W10”)
  • VC = Vendor Code
  • RC = (functional) Role Code
  • HV = Hardware Value


Most variations come down to the order in which the chosen key values are combined, with or without a delimiter (usually a hyphen):

  • [VALUE] + “-” + [VALUE] example: “ATL-123456”
  • [VALUE] + [VALUE] example: “D123456”
  • [VALUE] + “-” + [VALUE] + [VALUE] example: “NYC-L123456”
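
Mechanically, composing any of these patterns is just a string join. Here’s a minimal sketch (the function name and sample values are mine, not any kind of standard):

```
# sketch: glue the chosen key values together, with an optional delimiter
function New-DeviceName {
    param (
        [string[]] $Parts,         # e.g. location code, form factor, asset tag
        [string] $Delimiter = ''   # '' for none, '-' for hyphenated styles
    )
    $Parts -join $Delimiter
}

New-DeviceName -Parts @('ATL','123456') -Delimiter '-'   # ATL-123456
New-DeviceName -Parts @('D','123456')                    # D123456
```

Whatever pattern you pick, remember that NetBIOS computer names top out at 15 characters, so the combined parts need to stay under that limit.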

Asset Tag and Serial Number (SN)

This is most often queried from the WMI stack using PowerShell or some other script or utility. The most common classes under root/cimv2 are Win32_BIOS and Win32_SystemEnclosure. I have also worked with customers who enforce their own proprietary “asset tag” numbering system, which is generated from some sort of asset inventory system (database output), along with a bar code sticker.

function Get-SerialNumber {
  param ([int] $MaxSerialLen = 15)
  $sn = (Get-CimInstance -Namespace root/cimv2 -Class Win32_SystemEnclosure).SerialNumber
  if ([string]::IsNullOrEmpty($sn) -or $sn -eq 'None') {
    $Script:WarnFlag = $True
    $sn = ""
  }
  Write-Verbose "*** raw serial number is $sn"
  if ($sn.Length -gt $MaxSerialLen) {
    $sn = $sn.Substring(0,$MaxSerialLen)
  }
  Write-Output $sn
}

Form Factor Code (FC)

This is usually an abbreviation of the form factor general type. Most often these are “L” or “LT” for “laptop”, “D” or “DT” for “desktop” (or “W” or “WS” for “workstation”), and so on. Other variations I’ve seen include “LAP”, “LTP”, “WKST” and whatever else a tube of model glue will conjure up.

Form factor can be derived from various means, depending upon the scale and variety of devices in the environment. For some, it’s easiest to use distinct model names as indicators, such as Dell Latitude vs OptiPlex, or HP ProBook vs ProDesk.
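A minimal sketch of the model-name approach (the function name and the model patterns here are just examples; match them to your own fleet):

```powershell
# Sketch: derive a form factor code from the model name alone.
# The model patterns below are examples; adjust them to your fleet.
function Get-FormFactorFromModel {
  param ([string] $Model)
  switch -Regex ($Model) {
    'Latitude|ProBook|ThinkPad'    { 'L'; break }  # laptops
    'OptiPlex|ProDesk|ThinkCentre' { 'D'; break }  # desktops
    'Virtual'                      { 'V'; break }  # virtual machines
    default                        { 'O' }         # other / unknown
  }
}

# live query (Windows only)
if (Get-Command Get-CimInstance -ErrorAction SilentlyContinue) {
  $model = (Get-CimInstance -Namespace root/cimv2 -Class Win32_ComputerSystem -ErrorAction SilentlyContinue).Model
  Get-FormFactorFromModel -Model $model
}
```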

In other cases, it requires querying the ChassisTypes value from WMI class Win32_SystemEnclosure, and dealing with potential array values (docks, dongles, port replicators, etc.). For a list of chassis type values, go here. One example for deriving the form type using PowerShell:

function Get-FormFactorCode {
  param ()
  try {
    $mn = (Get-CimInstance -Namespace root/cimv2 -Class Win32_ComputerSystem).Model
    $ct = ((Get-CimInstance -Namespace root/cimv2 -Class Win32_SystemEnclosure).ChassisTypes)
    # docks and port replicators often cause this to
    # return an array rather than a single value
    if ($mn -match 'Virtual') { $ff = 'V' }
    else {
      if ($ct.Count -gt 1) {
        $ct = $ct[0]
        Write-Verbose "*** multiple values returned"
      }
      Write-Verbose "*** wmi chassis type = $ct"
      switch ($ct) {
        {($_ -in (3,4,5,6,7,13,15,24,35))} { $ff = 'D' }
        {($_ -in (8,9,10,12,14,18,21))} { $ff = 'L' }
        {($_ -in (17,19,20,22,23,25,26,27,28,29))} { $ff = 'S' }
        {($_ -in (30,31,32))} { $ff = 'T' }
        {($_ -in (11))} { $ff = 'M' }
        {($_ -in (1,2,33))} { $ff = 'O' }
        {($_ -in (34))} { $ff = 'E' }
      }
    }
    Write-Output $ff
  }
  catch {}
} # end of function

Location Code (LC)

This is usually some form of identifier for the physical (geographic) location of the device. It is also (typically) predicated on a stationary device, such as a desktop or server. However, it can also be generalized to things such as city, state, region, or building. Other instances I’ve seen include floor number (or name), room number, and row/column indicators for large, tabular-arranged computer rooms.

  • Country
  • City, County
  • State, Region, Territory
  • Subdivision or Office Park
  • Building
  • Floor Number
  • Room Number
  • Ship, Submarine
  • Aircraft
  • Assembly Line
  • Mile Marker

Organization Group Code (DC)

Also called “Department Code” or “Division Code”. This usually takes one of two forms: an abbreviation or a numeric code. Examples include “ENG” for “Engineering”, and “0132” for “Marketing” or “Project X”, etc.

Another variation I’ve seen is a concatenated Department + Division, which can be either or both (hybrid), such as “AC04” for “Accounting Department, Division 4”. Other examples include Project Code, Budget Code, and Contract Number.

Username or UserID (UN)

Some organizations prefer to name devices in reference to the assigned user. This is typically the Active Directory “SamAccountName” value. However, in some environments this may be the employee number value.

Abbreviated User Name (AN)

This is usually similar to Username/UserID formats, but may use first initial + last name or some substring portion thereof. For example, user John Smith may be assigned device “JSMITH”.
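A quick sketch of building the abbreviated name (the `Get-AbbreviatedName` helper is hypothetical; in practice the name values would come from AD or an imaging front-end prompt):

```powershell
# Sketch: build a first-initial + last-name value.
# Get-AbbreviatedName is a hypothetical helper; in production the
# name values would come from AD or an imaging front-end prompt.
function Get-AbbreviatedName {
  param (
    [Parameter(Mandatory)][string] $GivenName,
    [Parameter(Mandatory)][string] $Surname,
    [int] $MaxLen = 15
  )
  $an = ($GivenName.Substring(0,1) + $Surname).ToUpper()
  # truncate to keep within the NetBIOS name limit
  if ($an.Length -gt $MaxLen) { $an = $an.Substring(0,$MaxLen) }
  Write-Output $an
}

Get-AbbreviatedName -GivenName 'John' -Surname 'Smith'   # JSMITH
```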

Operating System (OS)

Some organizations include some sort of code to indicate the operating system of the device. The reasons vary, but the most common examples I’ve seen include “W7”, “W10”, “LX” (Linux), “MAC” (Apple macOS), and “UB” (Ubuntu).

Vendor Code (VC)

This is usually some indicator of the hardware manufacturer, such as Acer, Apple, Dell, HP, Lenovo, Microsoft or Samsung. In environments that use custom-built devices, it may indicate the system builder or integrator instead.
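A sketch of mapping the WMI Manufacturer string to a short vendor code (the codes and patterns here are examples only; pick your own abbreviations):

```powershell
# Sketch: map the WMI Manufacturer string to a short vendor code.
# These codes and regex patterns are examples only.
function Get-VendorCode {
  param ([string] $Manufacturer)
  switch -Regex ($Manufacturer) {
    'Dell'       { 'DL'; break }
    'Hewlett|HP' { 'HP'; break }
    'Lenovo'     { 'LN'; break }
    'Microsoft'  { 'MS'; break }
    default      { 'XX' }  # unknown or custom-built
  }
}

# live query (Windows only)
if (Get-Command Get-CimInstance -ErrorAction SilentlyContinue) {
  $mfr = (Get-CimInstance -Namespace root/cimv2 -Class Win32_ComputerSystem -ErrorAction SilentlyContinue).Manufacturer
  Get-VendorCode -Manufacturer $mfr
}
```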

Role Code (RC)

This one is most-often related to how the device is used. For example “K” for kiosk, “CR” for conference room, “TK” for ticketing, “X” for x-ray viewing, “HV” for HVAC controller, “EC” for elevator control, “LAB” for, well, a lab, and “CAM” for security camera recording.

Hardware Value (HV)

Hardware values (that I’ve seen) are usually derived from something relatively consistent and reliable for uniquely identifying a device. The most common examples I’ve seen are the MAC address and the UUID. Some organizations pull this from the packaging label on new shipments, which can be scanned easily to feed into other systems, which in turn can assist with imaging and naming automation.

Using a MAC address, for example, the colon (“:”) delimiter is usually removed to shorten the result and avoid other problems during renaming. For example, with PowerShell using the WMI class Win32_NetworkAdapterConfiguration:

$name = $((Get-WmiObject Win32_NetworkAdapterConfiguration | ? {$_.IPEnabled -eq $True})[0].MACAddress -replace ':', '')
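Similarly, the SMBIOS UUID can be pulled from the Win32_ComputerSystemProduct class and trimmed down. This is a sketch only; the `Get-HardwareValue` helper and the 8-character tail are my own choices, not a standard:

```powershell
# Sketch: derive a short hardware value from the SMBIOS UUID.
# Get-HardwareValue is a hypothetical helper.
function Get-HardwareValue {
  param (
    [Parameter(Mandatory)][string] $Uuid,
    [int] $TailLen = 8
  )
  $hv = $Uuid -replace '-', ''
  # a full UUID is 32 hex chars; keep only the tail to stay under
  # the 15-character NetBIOS name limit once a prefix is added
  if ($hv.Length -gt $TailLen) { $hv = $hv.Substring($hv.Length - $TailLen) }
  Write-Output $hv.ToUpper()
}

# live query (Windows only)
if (Get-Command Get-CimInstance -ErrorAction SilentlyContinue) {
  $uuid = (Get-CimInstance -Namespace root/cimv2 -Class Win32_ComputerSystemProduct -ErrorAction SilentlyContinue).UUID
  if ($uuid) { $name = 'WS' + (Get-HardwareValue -Uuid $uuid) }
}
```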

Oh yeah, Servers

I mentioned workstations and servers near the beginning of this mind-dump. Servers are weird. Server admins are weirder, and I should know, because I met one once. Servers typically get named by their function and/or location. This age-old practice is smoking even more Drano with the introduction of (dum-dee-dum-dum-dummm!) the Cloud.

As if it wasn’t bad enough when the names got stupid after virtualization fell on their heads. Admins had just barely crawled out of the pile of bodies from that keg party when the cloud came along and hit them in the face with a pipe wrench.

I’ve seen some really cool naming conventions, like comic book characters, movie characters, and some really stupid conventions like comic book characters and movie characters. Others include names of ships and rockets, Greek mythology characters, Roman leaders, US presidents, city names, state names, country names, and names of sports teams.

One of my favorites was a place that intentionally named servers to be confusing AF. Like mixing O’s and 0’s and 1’s and l’s, etc. Something like “O0O1l1ll11OZ2ZZ2Z”. I’m sure that guy wore Kevlar to work every day.

I suppose that if you can’t afford beer, drugs, or a big gun, it could provide badly-needed entertainment.


These are just some of the variations I’ve seen so far. I won’t be surprised if I run into yet another spin on this.

I’ve heard just about every argument, thought, theory, postulate, axiom and pontification on the subject of naming conventions for computer devices. Ultimately, I prefer using the least complicated option for a given situation, and hopefully one that provides some real benefit (operational efficiency or cost reduction, etc.).

I still get the urge to talk customers out of doing things just because “that’s how we’ve always done it”, and when they’re receptive, it’s nice. When they’re not, I move on to the next task.

I can see how it’s an easy trap to fall into. On the surface it seems intuitively helpful to apply a controlled naming process, but it’s often very challenging to produce any real value from it (cost savings, or revenue increase). Still, it seems like a porch light and a moth.

What are some other conventions you’ve dealt with? What (tangible) benefits did it provide to your organization?

Projects, Scripting, System Center, Technology, windows

Deploy PowerShell Modules with ConfigMgr Task Sequences

UPDATE: 10/24/2019 – Fixed a boneheaded part of this to use Save-Module and replaced the module installer script so that it just reads the module source folder name (and version sub-folder) to automatically place each module in the correct target folder.

I’d like to give a special “thanks!” to Mike Terrell and Donna Ryan for holding my (virtual) hands in the (virtual) darkness, leading up to this. The example Mike sent me is at

Basically, I needed to include some OSD task sequence steps to update the PowerShell Module “PowerShellGet” on newly-imaged Windows 10 devices, for a customer. The background on this is that Windows 10 continues to ship with outdated PowerShell modules, like PowerShellGet 1.0, when 2.2.1 has been out for some time. Other modules often depend on base modules like this as well, so the customer wanted it to be up-to-date.

As a demonstration that it was working, I chose to add another PowerShell module, PSWindowsUpdate, which I’m really finding to be useful (btw), just to confirm this plan worked in a subsequent step in the same task sequence. Not just installing the module, but also calling it from within the task sequence to prove it was working. Let’s bake a cake!

Let’s go!

If you don’t drink, it’s okay to just watch someone else drink. It will achieve the same goal of softening your brain to accept the following information.

Download the Modules

I used c:\psmodules for example only. You can substitute your own favorite low-fat, gluten-free, grass-fed folder for the temporary downloads. Add more Save-Module steps to include other modules if you desire.

mkdir C:\psmodules
Save-Module -Name PowerShellGet -Path c:\psmodules
Save-Module -Name PSWindowsUpdate -Path c:\psmodules

This will result in three (3) module folders under c:\psmodules:

  • PackageManagement
  • PowerShellGet
  • PSWindowsUpdate

NOTE: If you are performing the downloads on a computer which is running an older version of PowerShell than 5.1, you won’t have Save-Module to lean on. In that case, I would quit and look for a job that can afford to provide you with a Windows 10 computer, which has PowerShell 5.1 by default.

Otherwise, if you can’t quit, you can use the following example to download each of the modules. You will then need to extract the .zip file contents into the target folders, preserving the “modulename\version” path structure. Good luck. Keep playing the lottery. Don’t do drugs.

Invoke-WebRequest -Uri -OutFile c:\psmodules\
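For what it's worth, here's a hedged sketch assuming the PowerShell Gallery v2 endpoint (/api/v2/package/name/version), which serves a module as a .nupkg file (really just a .zip). The version number below is an example; substitute the current release. Expand-Archive requires PowerShell 5.0 or later, so on truly ancient hosts you're back to manual unzipping:

```powershell
# Sketch: download a module .nupkg from the PowerShell Gallery v2
# endpoint and extract it into the modulename\version structure.
# The version below is an example; substitute the current release.
$name    = 'PowerShellGet'
$version = '2.2.1'
$uri     = "https://www.powershellgallery.com/api/v2/package/$name/$version"
$zip     = "c:\psmodules\$name.zip"
try {
  Invoke-WebRequest -Uri $uri -OutFile $zip -UseBasicParsing
  # Expand-Archive keys off the .zip extension (PowerShell 5.0+)
  Expand-Archive -Path $zip -DestinationPath "c:\psmodules\$name\$version" -Force
  # the .nupkg also contains NuGet metadata (_rels, *.nuspec, etc.)
  # which can be deleted from the extracted folder afterward
}
catch {
  Write-Warning "download/extract failed: $($_.Exception.Message)"
}
```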

Prepare the Package Sources

  1. Move the three (3) package folders above to wherever you keep your content sources for ConfigMgr package and application deployments.
  2. Copy the PowerShell Module Installer script (see example 3) into each of the three folders.

Create the ConfigMgr Packages

  1. Open the Configuration Manager admin console
  2. Navigate to Software Library / Application Management / Packages
  3. Create a Package. Enter a Name, Version, Source (UNC) Path, and click Next. I recommend using the Module version for the Package version to keep things crisp and neatly folded for travel.
  4. Choose “Do not create a program“, and click Next.
  5. Repeat for the other Package, or don’t, I don’t care either.
  6. Right-click on the Package, and select “Distribute Content” and choose your favorite Distribution Point, Distribution Point Group or Collection based on your mood today. Wait for the soft little ball to turn green. When it does, yell out “the ball is green!”. Your co-workers will love it. Repeat for the other Package, if you made one.
  7. Get in your car and scroll down to “Operating Systems / Task Sequences” and open your favorite Task Sequence (actually, make a copy of it first, then edit the copy, so you can’t blame me for hosing your imaging environment).
  8. Somewhere near the end, add a new Group, and name it whatever you like. I named my group “PowerShell Modules” because it rhymes with beer.
  9. Optional (but kind of recommended during testing): Enable PowerShell Script Logging in the Task Sequence (link): Click Add / General / Set Task Sequence Variable: Rename the step to (whatever). Variable Name = OSDLogPowerShellParameters / Value = TRUE
  10. Click Add / General / Run PowerShell Script. Rename the step to (whatever). Select the Package you created in step 3 of “Create the ConfigMgr Packages” above, enter the Script name: “Install-ModulePackage.ps1“. Set PowerShell execution policy to “Bypass“, and you can set Parameters to “-Verbose” if you want to add Write-Verbose Easter eggs to amuse your coworkers.
  11. Next, add another step that will flip the mattress and dump all of this onto the floor to see how it wakes up. Click Add / General / Run PowerShell Script. Rename the step to (whatever). Select “Enter a PowerShell Script” this time (not a Package), and click the “Edit Script” button. Paste in the code you see in the “Get-WindowsUpdate” example way way waaaaaay down below (go ahead, I can wait here until you get back).

Example 3 – Install-ModulePackage.ps1

try {
  $MName = Split-Path $PSScriptRoot -Leaf
  Write-Verbose "module: $MName"
  # assumes one version sub-folder under the module source folder
  $MVersion = (Get-ChildItem -Path $PSScriptRoot -Directory | Select-Object -First 1).Name
  Write-Verbose "version: $MVersion"
  $TargetPath = Join-Path -Path $env:ProgramFiles -ChildPath "WindowsPowerShell\Modules\$MName\$MVersion"
  # check if module + version target folder exists
  if (-not (Test-Path $TargetPath)) {
    $SourcePath = Join-Path -Path $PSScriptRoot -ChildPath $MVersion
    Write-Verbose "installing module: $MName $MVersion"
    mkdir $TargetPath -Force -ErrorAction Stop | Out-Null
    xcopy $SourcePath\*.* $TargetPath /s
    $result = 0
  }
  else {
    Write-Verbose "module already installed"
    $result = 1
  }
}
catch {
  Write-Verbose $_.Exception.Message
  $result = -1
}
finally {
  Write-Output $result
}

ConfigMgr Process Examples

Get-WindowsUpdate Script

Import-Module PSWindowsUpdate
Get-WindowsUpdate -AcceptAll -Install -WindowsUpdate -RecurseCycle 3 -IgnoreReboot

More Ideas!

Since PowerShell opens up an inter-galactic can of whoopass, like a steroid-infused, turbo-charged, nitro-methane powered Lego kit, you can inject other modules to do some ridonkulous things. For example, modules like Carbon, DbaTools, Posh365 and more, can allow you to query and manipulate things beyond what the built-in task steps provide. Or you can keep eating the blue pill and wait for Agent Smith to show up. Either way is fine.

I hope this was helpful?