Cloud, System Center, Technology

Deploy Office 365 ProPlus with Visio and Project using Configuration Manager 1807 with fries and a soft drink


Update: 2018-08-27

I meant to post this a few weeks ago, but anyhow… Microsoft released an updated ODT which removes the “Match Current OS” option from the language options list.  That seems to work fine.  However, the Project Online client has an issue when its Detection Rule is the same as the one for Office 365 ProPlus: deploying it to a machine that already has O365 ProPlus (latest/same version) causes the Project deployment to think it’s already installed.  Just change the detection rule and it works.  The Visio Pro deployment uses a different detection rule and seems to work fine.
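For the curious, here’s why identical detection rules collide.  As far as I can tell, Click-to-Run records every installed product in a single comma-separated ProductReleaseIds registry value (under HKLM\SOFTWARE\Microsoft\Office\ClickToRun\Configuration), so a detection rule keyed to the key’s existence, rather than the specific product ID, matches for any C2R product.  A hedged sketch of the matching logic (product IDs shown are examples):

```python
# Sketch only: Click-to-Run appears to track installed products in one
# comma-separated registry value, e.g. "O365ProPlusRetail,VisioProRetail".
# A detection rule that only checks the key's existence matches for ANY
# C2R product, which is how the Project deployment "sees" ProPlus.

def product_installed(product_release_ids, product_id):
    """True only if the exact product ID appears in the comma-separated list."""
    return product_id in [p.strip() for p in product_release_ids.split(",")]

# Machine with ProPlus and Visio, but no Project:
ids = "O365ProPlusRetail,VisioProRetail"
print(product_installed(ids, "VisioProRetail"))    # True
print(product_installed(ids, "ProjectProRetail"))  # False - Project should install
```

Point the detection rule at the specific product ID, not at something all three products write identically.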

As for Shared Computer Activation deployments, there are at least two (2) ways to go.  One is supported, the other is unknown (at this point).  The first is to simply build a new deployment, which sounds like a waste of storage space (compared with traditional O365 ProPlus deployment builds using ODT and XML files) but if you have deduplication turned on for the content source location it shouldn’t be a concern.  The other (semi/un-supported) way is to manually copy the “configuration.xml” and make a Shared Activation flavor of it, then add a new deployment type, set some sort of condition to control the scope of that deployment type, and go that route.  A little more convoluted, but possible.
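If you go the hand-rolled route, the Shared Activation flavor of configuration.xml really only differs by one property.  A minimal sketch (the product, channel, and language values here are examples; adjust them to match your build):

```xml
<Configuration>
  <Add OfficeClientEdition="64" Channel="Monthly">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
  <!-- The one line that makes this the Shared Computer Activation flavor -->
  <Property Name="SharedComputerLicensing" Value="1" />
  <Display Level="None" AcceptEULA="TRUE" />
</Configuration>
```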

Update: 2018-08-08

While working with a customer to deploy Office 365 ProPlus, with SCCM 1806, we discovered a bug in the OCT that has been confirmed by Microsoft; hopefully it will be fixed soon.  The bug involves the “Match the Current OS” language selection on English (US) platforms.  For some reason, that selection does not download the required language pack files to the deployment source, which causes deployments to fail with an error about “missing files”.

The catch here is it won’t return an error when it fails via SCCM.  The deployment fails but still returns “success” (exit code 0), and writes the registry key which is shown in the Detection Method configuration.  To see the error, we had to execute the setup.exe with /configure and the relevant .xml file from the client cache folder.  This is actually two (2) “bugs” as far as I can tell.

The fix/workaround is to simply select the actual language (e.g. “English (United States)”) rather than use the default “Match the Current OS”.


Someone asked if that surgeon on the left is Johan.  I cannot confirm, but it wouldn’t surprise me.


Ingredients

  • System Center Configuration Manager current branch 1807+
  • Basic knowledge about how to use ConfigMgr
  • Office 365 / ProPlus licensing
  • Coffee
  • A really bad attitude

Process Overview

  1. Create an Application and Deployment in ConfigMgr
  2. Target a Collection (devices or users)
  3. Drink coffee
  4. Punch someone in the face and go home (no, don’t do that)

Quick Notes

  • During the process of building and deploying the Office configuration, the ConfigMgr console will be locked out from making changes. You can still make it visible, but no scrolling/selecting is allowed.  Therefore, if you intend to deploy the configuration at the end of the procedure below, you should prepare the target collection in advance.
  • This will require access to the Internet in order to download the content for building the O365 deployment source.  If you don’t have access to the internet, you may want to look for a new job.
  • I posted a summary version of this using ConfigMgr 1806 on another blog here.  But this one has french fries.

Procedural Stuff

  1. Open the ConfigMgr admin console.  This is important.
  2. Navigate to Software Library > Office 365 Client Management
  3. On the Office 365 Client Management dashboard, scroll over to the far right until you see that high-quality ugly icon with the “Office 365 Client Installer” caption.  Click on it like you mean it.  (Just kidding about the icon, it really is nice)
  4. Give the Application a name (e.g. “Office 365 ProPlus with Visio and Project and Fries – 64-bit”)
  5. Enter a Description (optional)
  6. Enter the UNC source location path (the folder must exist, but the content will be populated at the end of this exercise).  It must be a UNC path. Drive letters are for losers.
  7. On the “Office Settings” page, click “Go to the Office Customization Tool” (or “Office Customisation Tool” for you non-American folks).  NOTE: If you do not already have the latest version of OCT installed, it will prompt you to download and extract it somewhere.  Then it will continue on.  Otherwise, it will just continue on.
  8. The OCT home page uses a layout designed by Stevie Wonder.  It’s very spread out, so, on a low-res display, expect some finger exercise workouts on your mouse or trackpad.  Anyhow, look for “To create or update a configuration file, click Next” and click on (you guessed it:) Next.
  9. The Software and Languages page will open first.
    1. Enter the Organization Name and select the Version (32 or 64 bit), then click Add.  IMPORTANT: Pay attention to the ADD and UPDATE buttons throughout this exciting journey, there is a reward at the end.  I’m just kidding, there is no reward, and no Santa Claus either.  Note also that while you’re making selections and changes, the information is being updated along the right-most column of the OCT form.
    2. Select the Software Suite or Product “Office 365 ProPlus” from the drop-down menu, and click Add
    3. Select the drop-down again, and choose “Visio Pro for Office” and click Add again.
    4. Select the drop-down again, and choose “Project Online Desktop Client” and click Add one more time.
    5. The Software section on the right-hand settings column should show all three selections.
    6. Scroll down to Languages.  You HAVE to select an option here.  It is not optional.  The default choice for most situations will be “Match Operating System”, however, you can add more languages if you like, or just have some fun with users by dropping unfamiliar languages on them.
    7. Scroll back up so you can view the navigation menu at top-left again.  Then select “Licensing and display settings”.
      1. For most situations, the KMS or MAK options will be disabled, with KMS automatically selected.  If yours is different, who cares, I’m writing this crappy blog post, not you.  So there.
      2. Under “Additional Properties”, you can select options to enable Shared Computer Activation, Automatically accept the EULA, and Pin Icons to Taskbar.  It’s worth noting that there is no longer a warning label about the taskbar icons, so it would appear to work on Windows 10.
      3. There is no “Add” or “Update” button to click for this part, so calm down, we’re almost there.
    8. Scroll up to the navigation menu again and select “Preferences“. This is where you may spend the rest of your life clicking on things, because there’s a lot to click on.  Or you may choose to ignore all of it and instead blame configuration issues on whoever handles GPO and MDM settings.  If that’s you, well, it sucks to be you.  Choose wisely.
    9. Take a moment to review your settings along the right-hand “Configured Settings” column.  Take a moment also to reflect on your poor choices in life, those missed opportunities, that last vacation trip, and how dysfunctional your family is (or could be).  Now, when you’re done with that, and put the loaded gun and liquor bottle back in the bottom drawer, and…
    10. Click “Submit” at the very top right.
    11. After you click Submit, you will be returned to the Application wizard.  Click Next.
    12. On the Deployment page, it will ask if you want to deploy the application now.  If you have a target collection ready to go, you can go for it. YOLO.
      If you choose Yes, you will be prompted for typical deployment configuration settings, otherwise, you’ll click Next two more times and then…
    13. Wait for the content to download and the deployment source to be prepared.
    14. Don’t forget to distribute the content to your DPs.
    15. Don’t forget to populate the target collection.
    16. Don’t forget to allow time for policy updates, etc.
    17. You can also modify Office 365 Client Installations by using the “Import” feature at the top-right of the OCT form.

I’d ask for feedback/comments on this, but nobody ever posts feedback or comments.


Projects, Scripting, System Center, Technology

SCCM and Chocolatey


Trying to leverage goodness from various mixtures of Chocolatey with SCCM is definitely not new. Others have been playing around with it for quite some time. However, I wanted to pause from a month of mind-numbing work-related things to jot down some thoughts, realizations, pontifications, gyrations and abbreviations on this.

Much of this idiotic rambling that ensues hereinafter is based on the free version of Chocolatey.  There is also a “Business” version that offers many automation niceties which you might prefer.  There’s a lot more to this Chocolatey thing than I can possibly blabber out in one blog post (even for yappy little old me), such as the Agent Service features, packaging, and more.  Visit the Chocolatey website for more.

1 – Is it “Better”?

No.  It’s just different.  But, regardless of whether it “fits” a particular need or environment, it’s often nice to know there’s another option available “just in case”.

2 – Who might this be of use to?

I can’t list every possible scenario, but if you line up the potential benefits, they point toward supporting remote users without a public-facing (or VPN-exposed) distribution point.  It also somewhat negates the need for any distribution resource, even cloud-based (Azure, AWS), since there’s no need for staging content unless you want to do so.

3 – How does SCCM fit?

At this point (build 1703) it’s best suited for use as a Package object, since there’s no real need for a detection method, or making install/uninstall deployment types.  A Program for installation, and another for uninstallation, are pretty much all that’s needed.

4 – How does an Install or Uninstall work via SCCM?

As an example, to install Git, you would make a Package, with no source content, and then create one Program as (for example only) “Install Git” using command “choco install git -y”, and another as “Uninstall Git” using “choco uninstall git -y”.  (Caveat: some packages incur dependencies, which may throw a prompt during an uninstall.  For those you can add -x before the -y, but refer to the Chocolatey documentation for more details)
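To make the Package/Program split concrete, here’s a throwaway sketch that builds those command lines (Python purely for illustration; the -x dependency-removal flag is the caveat mentioned above):

```python
def choco_program(action, package, remove_deps=False):
    """Build the command line for a ConfigMgr Program (install/uninstall/upgrade)."""
    parts = ["choco", action, package]
    if remove_deps and action == "uninstall":
        parts.append("-x")   # force-remove dependencies (see Chocolatey docs)
    parts.append("-y")       # suppress confirmation prompts
    return " ".join(parts)

print(choco_program("install", "git"))                     # choco install git -y
print(choco_program("uninstall", "git", remove_deps=True)) # choco uninstall git -x -y
```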

5 – How do you push updates to Chocolatey apps via SCCM?

You can use the above construct with a third Program named “Update Git” (for example) with command “choco upgrade git -y”.  Another option (and my preference) is to deploy a scheduled task that runs as the local System account, to run “choco upgrade all -y” at a preferred time or event (startup, login, etc.).  And, as you might have guessed by now (if you haven’t fallen asleep and face-planted into your cold pizza), someone has done this for you.

6 – Can you “bundle” apps with Chocolatey with or without SCCM?

Absolutely.  There’s a bazillion examples on the Internet, but here’s one I cobbled together for a quick lab demo a while back.  This one feeds a list of package names from a text file. You can also hard-code the list, or pull it from anywhere that PowerShell can reach it (and not just PowerShell, but any script that you can run on the intended Windows device).
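The original demo script isn’t reproduced here, so this is just a minimal sketch of the feed-a-text-file idea (shown in Python for brevity; any script language that runs on the target device works the same way):

```python
def bundle_install_commands(package_list):
    """Turn a text file's contents (one package per line, # for comments)
    into a list of choco install commands."""
    names = [ln.strip() for ln in package_list.splitlines()]
    names = [n for n in names if n and not n.startswith("#")]
    return ["choco install {0} -y".format(n) for n in names]

# Contents of a hypothetical packages.txt:
demo = """# lab demo apps
git
vscode

7zip
"""
for cmd in bundle_install_commands(demo):
    print(cmd)
```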

7 – What about MDT?

Here’s a twist: you can deploy Chocolatey packages using MDT, or deploy MDT using Chocolatey.  How freaking cool is that?  If you sniff enough glue, you might even construct a Rube Goldberg system that deploys itself and opens a wormhole to another dimension.  By the time you find your way back, America will be a subsidiary of McDonald’s and we’ll have real hoverboards.

8 – What about applying this to Windows Server builds?

You can.  I’d also recommend taking a look at BoxStarter and Terraform.  I built a few BoxStarter scripts using GitHub Gists for demos a while back.  Here’s one example for building an SCCM primary site server, but it’s in need of dusting off and a tune-up.  You can chop this up and do things all kinds of different (and probably better) ways than this.

The list of automation tools for building and configuring Windows computers is growing by the day.  By the time you read this sentence, there’s probably a few more.  Hold on, there’s another one.

PS – If you get really, really, reeeeeeally bored, and need something to either laugh at, ridicule or mock, you can poke around the rest of my Github mess.  I don’t care as long as you put the seat back down after flushing.

System Center, Technology

5 SCCM Myth Corrections

In keeping with the popular (and completely stupid) trend of “5 this…” and “10 that…” memes…


Myth 1 – Discovery Settings

Contrary to what you may hear (I hear it a lot, unfortunately), SCCM discovery settings do not modify the computers in your environment.  They simply allow SCCM to shine a flashlight on what’s in the environment.  Think of it like a restaurant menu.  The waiter hands it to you to look over and see what you’d like.  SCCM is the waiter.  It’s not cooking anything until you place your order.  Discovery doesn’t install anything, modify configuration settings, restart machines, etc.  The only situation in which it could make changes to discovered computers is if automatic client push installation is enabled (in most cases it shouldn’t be).

Myth 2 – Script Wrapping is not “Packaging”

“Packaging”, technically, is the process of creating the ORIGINAL installation payload (.exe, .msi, .whatever).  When you create a new package from an existing package, that’s technically called “repackaging”.  Executing a package with additional (optional) command line switches is simply called “using the package”.  Putting the package and switches into a script file is called “script wrapping”.  Entering the package name, with optional switches, into the SCCM console is called “doing your job”.

Myth 3 – “Packaging” is NOT “Repackaging”

When Microsoft builds the setup.exe for something like Office or Visual Studio Code, that’s called “Packaging”.  When you take that package, initiate a snapshot monitoring session on a reference computer using Flexera AdminStudio Repackager*, run the package installer, make some system changes, add stuff, remove stuff, laugh at stuff, yell at stuff, capture the changes, create a project, clean it up, bitch and moan a lot, drink some coffee, crank out the package solution, compile that into an .MSI or .EXE, copy it over to another folder location, go outside and scream the anger and frustration out of your soul, throw your cold, empty coffee cup at the nearest hard surface, then THAT is called “repackaging”.

Real, honest, true, repackaging sucks.  It’s not a fun job.  It’s like rebuilding a transmission to a typical auto mechanic, or removing impacted feces from a cow for a veterinarian.  Un-fun.  Last resort.  The kind of thing that makes you internalize a lot of anger and frustration at how the situation COULD have been avoided, had someone taken the time to do something differently at an earlier stage.  But NOooooooooo… it then becomes YOUR problem to deal with.  Anyhow, hourly billing offers some solace.

Myth 4 – SCCM will NOT automatically reimage all your computers just because you enable PXE with OSD

I’m not even going to waste time on this one.  If you don’t believe me, you’re an idiot, but don’t be offended.  Lots of people are idiots.  Most of them write blogs like this one.  Hey, wait a minute!?

Myth 5 – MDT and SCCM Can coexist.  SCCM and WSUS can coexist

I don’t know where it started, or who started it, but whoever said SCCM cannot coexist in the same environment with separate instances of MDT or WSUS was wrong.  It can.  It does.  In some cases, it’s even recommended.  Like anything else in IT, it comes down to having a solid technical and business case for doing it.


System Center, Technology

Why SCCM Doesn’t Accidentally Image Machines

I’ve finally had enough.  Maybe it’s the result of hearing people just blindly repeat false garbage and claiming it as fact (I call it “phact” now).  But it came after hearing yet another so-called engineer (“so-called” being another meme-ish aphorism du jour) state to a group of other so-called engineers that “SCCM can ‘just randomly reimage computers'” because either:

A. They’ve seen it, or more often…

B. They heard a friend say they saw it happen.

Truth:  NO.  SCCM CANNOT RANDOMLY REIMAGE COMPUTERS.  IT DOES NOT.  IT WILL NOT.  IT CAN’T.  IT WON’T.  Stop saying stupid shit like this.

The real reason is that someone (aka a stupid idiot, yes, double redundancy intended) was poking around and made changes without knowing what they were doing.  That’s it.  I’ve seen “unintended” cases of SCCM involved with reimaging computers, but it was ALWAYS ALWAYS ALWAYS (and still is) due to human stupidity.

I’m probably missing a few steps here, in fact, yes, I see one right now:  The Task Sequence Deployment setting labelled “Make available to the following” from the Deployment Settings tab (e.g. “Only media and PXE” versus “Configuration Manager clients, media and PXE”, etc.)


In short, your resident idiot would have to target the wrong collection, OR, put the wrong machines into the targeted collection, OR, use the wrong deployment assignment setting, AND…

Have the machine on a subnet with access to PXE, AND boot to the network (boot config), AND press F12 before the boot time-out expires, AND (either) did not put a password on the Task Sequence deployment OR entered the password.  That’s a lot of “accidental” stuff to accidentally trip over by accident.  Maybe your admin needs a walker and a crash helmet.

System Center, Technology

Dave’s SCCM Current Branch Packing List (Updated)


Going on a ConfigMgr installation hike?  You’ll need to pack some useful stuff for the journey.  Drop it on a thumb drive; copy it to cloud drives (more than one); strap it onto an Alpaca with duct tape; roll it on / shove it out of a C-17?  Whatever works.

Updated 11/26/2016 – includes newer versions.

Updated 7/21/2016 – added UserVoice link below.

Did I miss other useful links?  Let me know!

business, System Center, Technology

And Now for a Discussion about Automation


This part is important for setting the context of the discussion that follows.  This is another one of my verbose deep-dive rants involving technology and human interference factors. 🙂

So, a close friend and colleague asked me to look over the SOP his new employer uses for imaging Windows computers.  I read through it, and began red-lining parts which could be “improved upon” in some fashion.  Some of the comments looked like this:

  • “Remove this and use a GPO”
  • “Get rid of this prompt. Use the first input to do a remote query for the rest”
  • “Never ask this!  Drive it from other input values.”
  • “Is this a kiosk or conference room desktop or what?”
  • “Mandatory profile and LP settings, and sprinkle some GPO sauce on it.”
  • “Do users get local admin rights?  Please tell me they do not!”

It got me to step back about 200 feet (conceptually speaking, because my house isn’t nearly that big), and apply a broader perspective.

I started to jot down some more, higher-level questions:

  • Do you get a list of serial numbers when the boxes are shipped, or only when they arrive?
  • Do you have a SQL Server instance you can add a custom DB onto?
  • What are the WAN links like?
  • Do you want to push apps to users or have them initiate the installations?
  • How well are you associated with the IT, HR and Finance managers?
  • Do you have a Starbucks nearby?  Never mind.

What I often see in customer environments is not that surprising to most consultants: a tendency to hold onto whatever works, quite often for too long.  It’s easy to blame the IT staff, but I find it’s more often a symptom rather than a root cause.  The actual root cause is an inefficient operational tactic, driven by a poorly conceived IT services strategy, almost always aimed at “cost cutting” rather than true innovation.

For example: hundreds of hours wasted annually on building, maintaining and troubleshooting dozens of fat Ghost images, initiated from DVD disks or USB thumb drives.  One IT person doing all the imaging, when the work could be delegated or automated.

A glaring sign of this is the all-too common over-consolidation of job duties.  The net result of which is a relentless march towards the next fire to extinguish, rather than having the luxury of time and resources to design and build fireproof homes (again, metaphorically speaking).

…a relentless march towards the next fire to extinguish, rather than having the luxury of time and resources to design and build fireproof homes.

After allowing myself to stare off into space, pondering the quagmire this IT world is becoming, I had to pull myself back, like the line of crew members slapping that hysterical passenger in the movie Airplane.


Enough of that pseudo-intellectual babble!  This is serious stuff, right?!  Okay, too much coffee.  Let’s calm down…

Low-Hanging Fruit

If I had to break down and prioritize a list of “what’s wrong” with most imaging processes I encounter today, it might be as follows:

  1. One-Size-Fits-All image libraries
  2. Too many manual steps **
  3. Inefficient Update processes
  4. Poor deployment infrastructure
  5. Inefficient staffing organization

I won’t dive into the first item since it’s already been blogged to death and everyone else has covered it as well as (or better than) I could.

As for item #2, however, I can digress profusely.  Ha ha.  But rather than rationalize and pontificate, I’ll summarize some (hopefully) helpful guidelines.  I’ll save the last 3 for later.

Eliminating Manual Work

  • Define the smallest number of common role configurations:
    • Often referred to as “COE configurations”
      • End-user devices (desktops, laptops), power-user devices (workstations), kiosks and conference room devices, and headless controllers, to name a few.
    • Start with a common baseline OS configuration (if possible)
    • Use run-time logic to steer the imaging process based on conditions.
      • Use task sequences to map and execute the branch configurations (for the above roles)
  • Identify configuration goals
    • Drift and No-Drift items:
      • Which items can “drift” and which should not, once placed into production.  If it can “drift”, it can be implemented as a “start” or “baseline” and left open for users to modify.
    • If the device will be domain-joined:
      • If it cannot drift, remove the manual configuration from the image and use a more efficient (and less cumbersome) technology like Group Policy.
    • If not domain-joined:
      • Lock it down via mandatory profiles, local policy settings, and restricted permissions.
    • In short – GET SHIT OUT OF THE IMAGE if it can be done by other means more efficiently and effectively.  Seriously, so many places have pages of procedures walking the engineer through making a ton of manual configuration changes to the image, which could be done by GPO checkboxes.
  • Identify Organizational Role Mappings
    • That sounds complicated, but it’s not.  This is really about identifying what a configuration is driven by.  If the end-user determines the configuration, then start with that.  If it’s the users’ department or organizational group, start with that.
    • Build a mapping that says “if ___ then ____” (e.g. “If Engineer then (list of properties, configuration settings, apps, etc.)”
    • Take that mapping and devise a means to automate the shit out of it.  That’s right, the shit out of it.  That’s a technical term.
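As a sketch of what automating that “if ___ then ___” mapping could look like (every department name, OU path, group, and app below is made up for illustration):

```python
# Hypothetical role map: department -> AD placement, groups, and app list.
# In production this would be fed from an authoritative source, not hard-coded.
ROLE_MAP = {
    "Engineering": {
        "ou": "OU=Engineering,OU=Workstations,DC=corp,DC=example,DC=com",
        "groups": ["ENG-Apps", "ENG-Licensing", "ENG-VPN"],
        "apps": ["AutoCAD", "MATLAB"],
    },
    "Finance": {
        "ou": "OU=Finance,OU=Workstations,DC=corp,DC=example,DC=com",
        "groups": ["FIN-Apps"],
        "apps": ["SAP GUI"],
    },
}

BASELINE = {
    "ou": "OU=Workstations,DC=corp,DC=example,DC=com",
    "groups": [],
    "apps": [],
}

def config_for(department):
    """Resolve a department to its target configuration; unmapped
    departments fall back to the bare baseline."""
    return ROLE_MAP.get(department, BASELINE)

print(config_for("Engineering")["apps"])  # ['AutoCAD', 'MATLAB']
```

A task sequence (or the script it calls) can then walk this mapping to set variables and pick conditional steps, instead of a technician answering prompts.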

The goals should be:

  • Reduce the image library size (variations)
  • Reduce the image content size (capacity)
  • Increase flexibility (adaptability)
  • Increase automation

Decision Point:  If the organization uses a real “one-size-fits-all” configuration, because everyone does the exact same job (with regards to their computer), or uses the same applications, you can stop here.  Consider yourself bored and under-challenged at work.  Skip down to “The Rest”.

Warning: Digression Ensues

Let’s dive into that bullet about “mapping” the associations, and start with a “case study” assumption:

“The law firm of ‘Shaft, Bender and Payne’ has 50,000 employees at 500 locations with the majority located at the headquarters in Phisthole, KY.  Employees use desktop and laptop computers, which are provisioned at remote locations using System Center Configuration Manager with OSD and MDT integration.”

“Newly-purchased computers are shipped to locations directly from the vendor, and arrive on pallets.  The shipping containers (cardboard boxes) provide the vendor’s “asset tag” (aka BIOS serial number) on the barcode sticker affixed to each box.  In addition, SBP prefers a naming convention which assigns an internal ‘Inventory Control Number’ as the device name, with a prefix code to identify the device form factor.  For example, ‘D-200501’ identifies a Desktop computer with inventory control number 200501.  The ICN is actually the BIOS serial number.”

“Each device is assigned to a distinct employee, and is configured to suit their membership in specific AD groups, and department.  User Phil McCrackin works in the Engineering department.  All Engineering users are provided with high-performance laptops which have the same suite of engineering software products installed, and are configured in Active Directory to place the account under their department OU, and added to three department-related security groups.”

“At the time each device is connected and powered on into a PXE session, the technician should not have to specify ANY input values prior to the task sequence being executed.  Good luck!  We’ll wait for you at the corner pub.”

There are a lot of ways to approach this from an automation aspect, but ONE possible option is to employ some custom scripts which are called by Task Sequence steps during imaging.  The scripts could query the local BIOS serial number, use that to query a remote database to find associated information, and then use the returned values to update task sequence variables and guide conditional branching for other steps (tasks).

Did that make sense?  Let’s try an example:

Maybe some IT folks got together, after sniffing glue, and added a custom database named “ComputerImaging” to their internal SQL Server host.  In that new database, they created a table named “dbo.ImageQueue” with the following columns:

  • ImageID (not null, identity, pk)
  • SerialNumber (not null, varchar)
  • Username (not null, varchar)
  • DateTimeRequested (not null, smalldatetime)
  • DateTimeCompleted (null, smalldatetime)

Another table named “dbo.DeptInfo” has the following columns:

  • DepartmentID (not null, int, pk)
  • DepartmentName (not null, varchar)

This table should be populated from a query against an official source within your organization, NOT manually entered!

Another table named “dbo.DeptAD” with the following columns:

  • DepartmentID (not null, int, fk)
  • ADOU (not null, varchar)
  • ADGroups (null, varchar)

And yet another table named “dbo.DeptUsers” has the following columns:

  • Username (not null, varchar)
  • DepartmentID (not null, int)

This table should be fed directly from an import job that reads from an HR database somewhere else (please do not host your custom DB mess on the same host as the HR database).

And finally, a SQL view named “dbo.v_ImageRequests” is created to combine the following columns:

  • ImageID, SerialNumber, UserName, DepartmentName, ADOU, ADGroups, DateTimeRequested, DateTimeCompleted

The diagram might look like this…


A third IT drug addict takes a break from huffing spray paint cans long enough to build you a script that queries the WMI root\cimv2 provider class Win32_SystemEnclosure to get the SerialNumber and ChassisTypes property values from the local machine within the WinPE session during PXE.  The next section in that script forms a query to request a matching row from the view “dbo.v_ImageRequests”:

$query = "SELECT DISTINCT ImageID, SerialNumber, UserName, DepartmentName, ADOU, ADGroups, DateTimeRequested, DateTimeCompleted FROM dbo.v_ImageRequests WHERE SerialNumber = '$AssetTag'"

Using the returned row (if not null), it assigns each value to a task sequence variable, and concatenates the desired computer name using the ChassisTypes number via another mapping (either within the scripting using a switch/case block, or it could use another SQL query, or duct tape, tree branches and a pot of boiling water, whatever works for you).
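The switch/case flavor of that ChassisTypes mapping might look like the sketch below (shown in Python for brevity; the chassis type numbers come from the SMBIOS chassis type list, and the D-/L- prefix convention is the SBP example’s, so treat both as assumptions to verify for your hardware):

```python
# Win32_SystemEnclosure.ChassisTypes values (partial list per SMBIOS):
# 3/4/6/7 are desktop/tower form factors, 8/9/10/14 are portable form factors.
DESKTOP_TYPES = {3, 4, 6, 7}
LAPTOP_TYPES = {8, 9, 10, 14}

def computer_name(chassis_type, serial):
    """Concatenate the SBP-style device name, e.g. D-200501 for a desktop."""
    if chassis_type in DESKTOP_TYPES:
        prefix = "D"
    elif chassis_type in LAPTOP_TYPES:
        prefix = "L"
    else:
        prefix = "X"  # unknown form factor: flag it for manual review
    return "{0}-{1}".format(prefix, serial)

print(computer_name(3, "200501"))   # D-200501
print(computer_name(9, "200502"))   # L-200502
```

The result gets written to OSDComputerName (or whatever task sequence variable your process uses) before the Apply Windows Settings step.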

So, what about the front-end?  That’s right.  Someone has to populate this thing with data to be queried.  Well, here’s a suggestion:

  • Identify a willing app developer or web developer.  If they’re unwilling, bribe or threaten them.  Food usually works, as does beer.
  • Get them to build you a form which accepts the following input values:  SerialNumber, and UserName.
  • When the form is submitted, it needs to insert that data into the table

You can easily connect a hand-held laser barcode scanner to a cheap computer, scan the boxes while they’re still on the pallet, and speed the form entry process.

Now, assuming you trusted the glue-sniffing IT nerds who wrote your scripts, you can update the SCCM task sequence and test it.  Use a VM guest with some forced input data (remember that VM guests have funky serial numbers in the WMI store) or a spare physical machine.  If all goes well, and you didn’t join them in sniffing glue, you should have a process that allows you to un-box the new machines, set them on a bench, plug them in and go.  The task sequence should call the script, which queries the local serial number, queries the remote database for the matching user, department, AD properties and even determines optional Install Applications items to install.

This process has been tested and used in production at several customers I know of.  The time-savings has ranged from 50% to 90% over the previous process.

DISCLAIMER:  This is obviously skewed by the unique naming convention constraint.  If the customer chose to use the BIOS serial number directly in the naming process, it would save even more time.  This would allow using the built-in features of MDT and SCCM with much less customization.

DISCLAIMER 2: Want me to code this up for you?  Contact me for further discussion.

Bonus Points:  If you add another script which simply performs a SQL “UPDATE” on the matching row “DateTimeCompleted” column in the dbo.ImageQueue table, it can do a “check-in” at the end of the task sequence to capture actual progress and metrics for large scale batch imaging.  Or not.  Just go enjoy a cold one.  You earned it.
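The core of that check-in script is a one-row UPDATE.  A hedged sketch of building it as a parameterized statement (table and column names come from the example schema earlier; how you execute it from WinPE is up to you):

```python
def checkin_statement(serial):
    """Build a parameterized UPDATE for the end-of-task-sequence check-in.
    Only stamps rows not already completed, so re-imaging shows up cleanly."""
    sql = ("UPDATE dbo.ImageQueue SET DateTimeCompleted = GETDATE() "
           "WHERE SerialNumber = ? AND DateTimeCompleted IS NULL")
    return sql, (serial,)

stmt, params = checkin_statement("200501")
print(stmt)
print(params)  # ('200501',)
```

Parameterizing the serial number keeps a mangled BIOS string from turning into accidental SQL.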

Yes, I know some of this is reinventing the wheel.  But then again, this is a unique situation, offered only as a possible option, not the only option.

I told you I was going to digress.  But I’m not quite done yet.

The Rest

What about the remaining three (3) items?

  1. Inefficient Update processes
  2. Poor deployment infrastructure
  3. Inefficient staffing organization

Inefficient Update processes include manually recapturing images as new baselines.  That process typically happens less frequently, such as monthly or quarterly, but still… investigate new ways to do it automatically.

Poor Deployment Infrastructure denotes network configurations, equipment (aged, substandard, etc.), poor WAN links, incorrect DHCP scoping, DNS issues, etc.

Inefficient Staffing Organization is the messiest because it involves company cultures, office politics, emotional fog, and other human stupidities.  It can be very difficult to objectively assess this without outside perspective.  And even then, it’s not uncommon for the organization to listen to the outside recommendations, and do nothing about changing things.

And, I must sleep. Thank you for reading this!

Projects, Scripting, System Center, Technology

Deploy Visual Studio Code w/Git using SCCM

Ingredients and Downloads

  • System Center Configuration Manager Current Branch
  • Visual Studio Code 1.0
  • Git 2.8.1 (32-bit or 64-bit; I chose 64-bit, and so should you)
  • Place both downloads in a common folder for this procedure


  • You can accomplish this several other ways as well, including Chocolatey and scripting, if you prefer.  This is just one way to do it.
  • The VS Code installer is 32-bit, while the Git installer is 64-bit (that choice follows the host operating system, not the VS Code instance)
  • This particular deployment will involve:
    • One (1) Application object, with
    • Two (2) Deployment Types: One for Visual Studio Code, and the other for Git

Procedure Images




  1. Create a new Application
    1. Manually specify the application information
    2. Fill in properties (name, publisher, version, etc.).  For my example: “Visual Studio Code with Git”, “Microsoft”, and “1.0”
  2. Create new Deployment Type (click “Add”)
    1. Type = “Script Installer” (relax, no scripting will be required)
    2. Manually specify the deployment type information
    3. Fill in properties (e.g. “Visual Studio Code 1.0”)
    4. Specify Content Location (UNC path)
    5. Select “VSCodeSetup-stable.exe” using the Browse button (refer to Mike Robbins’ info about command-line options: link)
    6. Place the cursor in the same box as the filename, at the very end, and append ” /VERYSILENT /NORESTART”.  It should look like this:
      “VSCodeSetup-stable.exe /VERYSILENT /NORESTART”
    7. Place cursor in the “Uninstall program” box and paste in:
      “C:\Program Files (x86)\Microsoft VS Code\unins000.exe” /SILENT
  3. Create a Detection Rule:
    Setting Type = Registry
    Key = SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\{F8A2A208-72B3-4D61-95FC-8A65D340689B}_is1
    (yes, that is a rather funky GUID-ish key name)
    Value = “DisplayVersion” (without quotes)
    Data Type = “Version”

    Select the lower radio button (“This registry setting must satisfy the following…”)

    1. Operator = “Greater than or equal to” / Value = “1.0” (without quotes)
  4. Specify User Experience
    1. Installation behavior = Install for system
    2. Logon requirement = Whether or not a user is logged on
    3. Installation program visibility = Hidden
    4. Maximum allowed run time = 60 (if it takes this long or longer, buy new hardware)
  5. Specify System Requirements
    1. Follow the guidelines here.
  6. Specify Dependencies
    1. If you’re deploying to Windows 10, take this moment to stretch and yawn.  If you’re running Windows 7, read the base requirements for things like .NET Framework 4.5, etc. here.
    2. In my case, I left this blank since my target machines have everything needed.
  7. Next / Next / Finish
  8. Create Another Deployment Type, repeat steps 2 – 7 above, with the following changes:
    1. Select the Git installer file (e.g. Git-2.8.1-64-bit.exe) and append the same arguments: /VERYSILENT /NORESTART
    2. Enter uninstall string:
      “C:\Program Files\Git\unins000.exe” /SILENT
    3. Specify the Detection Rule
      Type = Registry
      Key = SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Git_is1
      Value = DisplayVersion
      Type = Version
      Greater than or equal to 2.8.1
  9. Complete the Deployment Type
  10. Finish the Application
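
Before distributing the content, you can sanity-check both detection rules on a hand-installed test machine.  This PowerShell sketch reads the same registry values the rules use.  Note the [version] casts, which is also why the rules use the “Version” data type rather than “String” (a plain string comparison would rank “1.9” above “1.10”):

```powershell
# Read the uninstall keys the two detection rules point at
$vscode = Get-ItemProperty 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\{F8A2A208-72B3-4D61-95FC-8A65D340689B}_is1' -ErrorAction SilentlyContinue
$git    = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Git_is1' -ErrorAction SilentlyContinue

# Both should return True on a machine where the installs succeeded
[version]$vscode.DisplayVersion -ge [version]'1.0'
[version]$git.DisplayVersion    -ge [version]'2.8.1'
```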

Next steps are the usual stuff:

  • Distribute the content to the appropriate Distribution Points / DP Groups (maybe just a subset for pilot testing at first, then the rest when all is good)
  • Create or designate a target Collection
  • Go!

Let me know whether this was helpful, and what I could do differently to make it a better solution.  Please remember to rate this article – Thank you for reading!