This part is important for setting the context of the discussion that follows.  This is another one of my verbose deep-dive rants involving technology and human interference factors. 🙂

So, a close friend and colleague asked me to look over the SOP his new employer uses for imaging Windows computers.  I read through it, and began red-lining parts which could be “improved upon” in some fashion.  Some of the comments looked like this:

  • “Remove this and use a GPO”
  • “Get rid of this prompt. Use the first input to do a remote query for the rest”
  • “Never ask this!  Drive it from other input values.”
  • “Is this a kiosk or conference room desktop or what?”
  • “Mandatory profile and LP settings, and sprinkle some GPO sauce on it.”
  • “Do users get local admin rights?  Please tell me they do not!”

It got me to step back about 200 feet (conceptually speaking, because my house isn’t nearly that big), and apply a broader perspective.

I started to jot down some more, higher-level questions:

  • Do you get a list of serial numbers when the boxes are shipped, or only when they arrive?
  • Do you have a SQL Server instance you can add a custom DB onto?
  • What are the WAN links like?
  • Do you want to push apps to users or have them initiate the installations?
  • How well acquainted are you with the IT, HR and Finance managers?
  • Do you have a Starbucks nearby?  Never mind.

What I often see in customer environments is not that surprising to most consultants: a tendency to hold onto whatever works, quite often for too long.  It’s easy to blame the IT staff, but I find that’s more often a symptom than a root cause.  The actual root cause is an inefficient operational tactic, driven by a poorly conceived IT services strategy, almost always aimed at “cost cutting” rather than true innovation.

For example, hundreds of hours wasted annually on building, maintaining and troubleshooting dozens of fat Ghost images, initiated from DVD discs or USB thumb drives.  One IT person doing all the imaging, when that work could be delegated or automated.

A glaring sign of this is the all-too common over-consolidation of job duties.  The net result of which is a relentless march towards the next fire to extinguish, rather than having the luxury of time and resources to design and build fireproof homes (again, metaphorically speaking).


After allowing myself to stare off into space, pondering the quagmire this IT world is becoming, I had to pull myself back, like the line of passengers waiting to slap that hysterical passenger in the movie Airplane!.


Enough of that pseudo-intellectual babble!  This is serious stuff, right?!  Okay, too much coffee.  Let’s calm down…

Low-Hanging Fruit

If I had to break down and prioritize a list of “what’s wrong” with most imaging processes I encounter today, it might be as follows:

  1. One-Size-Fits-All image libraries
  2. Too many manual steps
  3. Inefficient Update processes
  4. Poor deployment infrastructure
  5. Inefficient staffing organization

I won’t dive into the first item since it’s already been blogged to death and everyone else has covered it as well as (or better than) I could.

As for item #2, however, I can digress profusely.  Ha ha.  But rather than rationalize and pontificate, I’ll summarize some (hopefully) helpful guidelines.  I’ll save the last 3 for later.

Eliminating Manual Work

  • Define the smallest number of common role configurations:
    • Often referred to as “COE configurations”
      • End-user devices (desktops, laptops), power-user devices (workstations), kiosks and conference room devices, and headless controllers, to name a few.
    • Start with a common baseline OS configuration (if possible)
    • Use run-time logic to steer the imaging process based on conditions.
      • Use task sequences to map and execute the branch configurations (for the above roles)
  • Identify configuration goals
    • Drift and No-Drift items:
      • Which items can “drift” and which should not, once placed into production.  If an item can drift, it can be implemented as a “start” or “baseline” and left open for users to modify.
    • If the device will be domain-joined:
      • If it cannot drift, remove the manual configuration from the image and use a more efficient (and less cumbersome) technology like Group Policy.
    • If not domain-joined:
      • Lock it down via mandatory profiles, local policy settings, and restricted permissions.
    • In short – GET SHIT OUT OF THE IMAGE if it can be done by other means more efficiently and effectively.  Seriously, so many places have pages of procedures walking the engineer through making a ton of manual configuration changes to the image, which could be done by GPO checkboxes.
  • Identify Organizational Role Mappings
    • That sounds complicated, but it’s not.  This is really about identifying what a configuration is driven by.  If the end-user determines the configuration, then start with that.  If it’s the users’ department or organizational group, start with that.
    • Build a mapping that says “if ___ then ____” (e.g. “If Engineer then (list of properties, configuration settings, apps, etc.)”
    • Take that mapping and devise a means to automate the shit out of it.  That’s right, the shit out of it.  That’s a technical term.
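As a sketch, that “if ___ then ____” mapping could live in a simple lookup table. Every department name, OU path, group name and app list below is a hypothetical example; substitute your own:

```powershell
# A minimal role-mapping table: department name -> configuration bundle.
# All values here are hypothetical placeholders.
$RoleMap = @{
    'Engineering' = @{
        ADOU     = 'OU=Engineering,OU=Workstations,DC=corp,DC=example'
        ADGroups = @('ENG-Licenses','ENG-FileShares','ENG-Printers')
        Apps     = @('CAD Suite','Math Toolkit')
    }
    'Finance' = @{
        ADOU     = 'OU=Finance,OU=Workstations,DC=corp,DC=example'
        ADGroups = @('FIN-Apps','FIN-FileShares')
        Apps     = @('ERP Client')
    }
}

function Get-RoleConfig {
    param([string]$Department)
    # Returns the mapped configuration bundle, or $null if unknown
    return $RoleMap[$Department]
}
```

From there, “automating the shit out of it” is mostly a matter of feeding the lookup result into task sequence variables.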

The goals should be:

  • Reduce the image library size (variations)
  • Reduce the image content size (capacity)
  • Increase flexibility (adaptability)
  • Increase automation

Decision Point:  If the organization uses a real “one-size-fits-all” configuration, because everyone does the exact same job (with regards to their computer), or uses the same applications, you can stop here.  Consider yourself bored and under-challenged at work.  Skip down to “The Rest”.

Warning: Digression Ensues

Let’s dive into that bullet about “mapping” the associations, and start with a “case study” assumption:

“The law firm of ‘Shaft, Bender and Payne’ has 50,000 employees at 500 locations with the majority located at the headquarters in Phisthole, KY.  Employees use desktop and laptop computers, which are provisioned at remote locations using System Center Configuration Manager with OSD and MDT integration.”

“Newly-purchased computers are shipped to locations directly from the vendor, and arrive on pallets.  The shipping containers (cardboard boxes) provide the vendor’s “asset tag” (aka BIOS serial number) on the barcode sticker affixed to each box.  In addition, SBP prefers a naming convention which assigns an internal ‘Inventory Control Number’ as the device name, with a prefix code to identify the device form factor.  For example, ‘D-200501’ identifies a Desktop computer with inventory control number 200501.  The ICN is not the same as the BIOS serial number.”

“Each device is assigned to a distinct employee, and is configured to suit that employee’s department and AD group memberships.  User Phil McCrackin works in the Engineering department.  All Engineering users are provided with high-performance laptops which have the same suite of engineering software products installed.  Their accounts are placed under the department OU in Active Directory and added to three department-related security groups.”

“At the time each device is connected and powered on into a PXE session, the technician should not have to specify ANY input values prior to the task sequence being executed.  Good luck!  We’ll wait for you at the corner pub.”

There are a lot of ways to approach this from an automation aspect, but ONE possible option is to employ custom scripts which are called by Task Sequence steps during imaging.  The scripts could query the local BIOS serial number, use that to query a remote database for the associated information, and then use the returned values to update task sequence variables and guide conditional branching for other steps (tasks).

Did that make sense?  Let’s try an example:

Maybe some IT folks got together, after sniffing glue, and added a custom database named “ComputerImaging” to their internal SQL Server host.  In that new database, they created a table named “dbo.ImageQueue” with the following columns:

  • ImageID (not null, identity, pk)
  • SerialNumber (not null, varchar)
  • Username (not null, varchar)
  • DateTimeRequested (not null, smalldatetime)
  • DateTimeCompleted (null, smalldatetime)

Another table named “dbo.DeptInfo” has the following columns:

  • DepartmentID (not null, int, pk)
  • DepartmentName (not null, varchar)

This table should be populated from a query against an official source within your organization, NOT manually entered!

Another table named “dbo.DeptAD” with the following columns:

  • DepartmentID (not null, int, fk)
  • ADOU (not null, varchar)
  • ADGroups (null, varchar)

And yet another table named “dbo.DeptUsers” has the following columns:

  • Username (not null, varchar)
  • DepartmentID (not null, int)

This table should be fed directly from an import job that reads from an HR database somewhere else (please do not host your custom DB mess on the same host as the HR database).

And finally, a SQL view named “dbo.v_ImageRequests” is created to combine the following columns:

  • ImageID, SerialNumber, UserName, DepartmentName, ADOU, ADGroups, DateTimeRequested, DateTimeCompleted

The diagram might look like this…
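In lieu of a picture, here is the same schema sketched as T-SQL DDL.  Column widths and the view’s join keys are assumptions pieced together from the column lists above; adjust to taste:

```sql
-- Sketch of the custom "ComputerImaging" database objects described above.
CREATE TABLE dbo.ImageQueue (
    ImageID           int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    SerialNumber      varchar(64)   NOT NULL,
    Username          varchar(64)   NOT NULL,
    DateTimeRequested smalldatetime NOT NULL,
    DateTimeCompleted smalldatetime NULL
);

CREATE TABLE dbo.DeptInfo (
    DepartmentID   int          NOT NULL PRIMARY KEY,
    DepartmentName varchar(128) NOT NULL
);

CREATE TABLE dbo.DeptAD (
    DepartmentID int NOT NULL
        FOREIGN KEY REFERENCES dbo.DeptInfo (DepartmentID),
    ADOU         varchar(256) NOT NULL,
    ADGroups     varchar(512) NULL
);

CREATE TABLE dbo.DeptUsers (
    Username     varchar(64) NOT NULL,
    DepartmentID int         NOT NULL
);
GO

-- The view stitches the queue row to its department and AD info
CREATE VIEW dbo.v_ImageRequests AS
SELECT q.ImageID, q.SerialNumber, q.Username AS UserName,
       d.DepartmentName, a.ADOU, a.ADGroups,
       q.DateTimeRequested, q.DateTimeCompleted
FROM dbo.ImageQueue q
JOIN dbo.DeptUsers u ON u.Username     = q.Username
JOIN dbo.DeptInfo  d ON d.DepartmentID = u.DepartmentID
JOIN dbo.DeptAD    a ON a.DepartmentID = u.DepartmentID;
GO
```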


A third IT drug addict takes a break from huffing spray paint long enough to build you a script that queries the WMI root\cimv2 provider class Win32_SystemEnclosure to get the SerialNumber and ChassisTypes property values from the local machine within the WinPE session during PXE.  The next section in that script forms a query to request a matching row from the view “dbo.v_ImageRequests”:

$query = "SELECT DISTINCT ImageID, SerialNumber, UserName, DepartmentName, ADOU, ADGroups, DateTimeRequested, DateTimeCompleted FROM dbo.v_ImageRequests WHERE SerialNumber = '$AssetTag'"
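The $AssetTag value comes from that WMI query.  A minimal sketch of the lookup, wrapped in a function so it can be tested against a mock object outside WinPE (Get-CimInstance is used here; older WinPE builds may only have Get-WmiObject):

```powershell
# Read SerialNumber and ChassisTypes from Win32_SystemEnclosure.
# $Enclosure defaults to the live WMI query, but a mock object can be
# passed in for testing on a machine that isn't the target.
function Get-EnclosureInfo {
    param(
        $Enclosure = (Get-CimInstance -Namespace root\cimv2 -ClassName Win32_SystemEnclosure)
    )
    [pscustomobject]@{
        SerialNumber = $Enclosure.SerialNumber
        ChassisType  = [int]$Enclosure.ChassisTypes[0]   # ChassisTypes is an array
    }
}
```

$AssetTag would then simply be (Get-EnclosureInfo).SerialNumber.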

Using the returned row (if not null), it assigns each value to a task sequence variable, and concatenates the desired computer name using the ChassisTypes number via another mapping (either within the scripting using a switch/case block, or it could use another SQL query, or duct tape, tree branches and a pot of boiling water, whatever works for you).
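That concatenation might look like the following switch/case sketch.  The prefix letters, and which ChassisTypes values map to them, are assumptions (3 through 7 are desktop-style chassis and 8/9/10/14 are portable-style in the Win32_SystemEnclosure documentation); adjust to your own naming convention:

```powershell
# Sketch: derive the device-name prefix from a ChassisTypes value, then
# concatenate it with the Inventory Control Number (ICN).
function Get-DeviceName {
    param(
        [int]$ChassisType,
        [string]$ICN
    )
    $prefix = switch ($ChassisType) {
        { $_ -in 3,4,5,6,7 }  { 'D'; break }   # desktop-style chassis
        { $_ -in 8,9,10,14 }  { 'L'; break }   # laptop/portable chassis
        default               { 'X' }          # unknown -> flag for review
    }
    return "$prefix-$ICN"
}
```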

So, what about the front-end?  That’s right.  Someone has to populate this thing with data to be queried.  Well, here’s a suggestion:

  • Identify a willing app developer or web developer.  If they’re unwilling, bribe or threaten them.  Food usually works, as does beer.
  • Get them to build you a form which accepts the following input values:  SerialNumber, and UserName.
  • When the form is submitted, it needs to insert that data into the dbo.ImageQueue table.

You can easily connect a hand-held laser barcode scanner to a cheap computer, scan the boxes while they’re still on the pallet, and speed the form entry process.

Now, assuming you trusted the glue-sniffing IT nerds who wrote your scripts, you can update the SCCM task sequence and test it.  Use a VM guest with some forced input data (remember that VM guests have funky serial numbers in the WMI store) or a spare physical machine.  If all goes well, and you didn’t join them in sniffing glue, you should have a process that allows you to un-box the new machines, set them on a bench, plug them in and go.  The task sequence should call the script, which queries the local serial number, queries the remote database for the matching user, department, AD properties and even determines optional Install Applications items to install.

This process has been tested and used in production at several customers I know of.  The time savings have ranged from 50% to 90% over the previous process.

DISCLAIMER:  This is obviously skewed by the unique naming convention constraint.  If the customer chose to use the BIOS serial number directly in the naming process, it would save even more time.  This would allow using the built-in features of MDT and SCCM with much less customization.

DISCLAIMER 2: Want me to code this up for you?  Contact me for further discussion.

Bonus Points:  If you add another script which simply performs a SQL “UPDATE” on the matching row “DateTimeCompleted” column in the dbo.ImageQueue table, it can do a “check-in” at the end of the task sequence to capture actual progress and metrics for large scale batch imaging.  Or not.  Just go enjoy a cold one.  You earned it.
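That check-in step might be as small as this.  The table and column names follow the dbo.ImageQueue definition above; actually running the command (Invoke-Sqlcmd, SqlClient, whatever your environment has) is left as an assumption, so the sketch just builds the parameterized statement:

```powershell
# Sketch of the "check-in" step: mark the matching queue row complete.
function New-CheckInCommand {
    param([string]$SerialNumber)
    @{
        Sql        = 'UPDATE dbo.ImageQueue SET DateTimeCompleted = GETDATE() WHERE SerialNumber = @SerialNumber AND DateTimeCompleted IS NULL;'
        Parameters = @{ SerialNumber = $SerialNumber }
    }
}
```

The DateTimeCompleted IS NULL predicate keeps a re-run from clobbering an earlier check-in timestamp.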

Yes, I know some of this is reinventing the wheel.  But then again, this is a unique situation, offered only as a possible option, not the only option.

I told you I was going to digress.  But I’m not quite done yet.

The Rest

What about the remaining three (3) items?

  1. Inefficient Update processes
  2. Poor deployment infrastructure
  3. Inefficient staffing organization

Inefficient Update processes include manually recapturing images as new baselines.  That process typically happens less frequently, such as monthly or quarterly, but still… investigate new ways to do it automatically.

Poor Deployment Infrastructure denotes network configurations, equipment (aged, substandard, etc.), poor WAN links, incorrect DHCP scoping, DNS issues, etc.

Inefficient Staffing Organization is the messiest because it involves company cultures, office politics, emotional fog, and other human stupidities.  It can be very difficult to objectively assess this without outside perspective.  And even then, it’s not uncommon for the organization to listen to the outside recommendations, and do nothing about changing things.

And, I must sleep. Thank you for reading this!


2 thoughts on “And Now for a Discussion about Automation”

  1. At some point you really should consider replacing some of your GPO solutions with Desired State Configuration solutions since it is more cross platform (works with non-Windows hosts) and does NOT require the system to be domain joined. 🙂

    1. “some”, yes.  That works better for server nodes but requires too much for desktop and laptop nodes, particularly when there are more than 10,000 nodes to manage.  For servers it’s a very good solution for some duties.  A good portion of server configuration consists of mundane settings, and most of those are far easier to establish and maintain via GPO.  But for role and feature stack configuration, DSC makes more sense.
