
In Part 1 of this article I described (okay, I blabbered on about) the idea of taking on a “do-it-yourself” project to make a subset of System Center Configuration Manager for a smaller environment.  As I mentioned in Part 1:  If you’re in a larger environment, it makes more sense to buy the real product.  It’s far more robust than anything one person has time to develop (unless all you do is develop such things; in which case, more power to you!).

Part 1 focused on what I called the two “core” feature sets of SCCM, which are Inventory Collection & Reporting, and Software Deployment.  I ended that with a semi-deep-dive into the first core feature.  In this article I will continue with the software deployment feature set.

Software Deployment

What is “software deployment”?  You might snicker and say that it’s “deploying software”, and you’d be correct, technically.  However, it’s really more than that.  There are quite a few moving parts (conceptually and process-wise) to consider, and each has its own challenges.

Many of the same nuts and bolts apply to this feature realm as to inventory data collection.  The following are just a few:

  • Windows Management Instrumentation (WMI)
  • The Windows Registry
  • The Windows File System
  • Security Context and Security Descriptors
  • Windows Services
  • Windows Event Logging

But first, let’s take this imaginary airplane up to a higher level and work our way down again…

The Scary Stuff

Software Deployment hinges on, in fact it relies entirely upon, silent installation capabilities.  In order to “push” an installation (or uninstall) to a large number of computers at once, with minimal impact on users and business operations, it really needs to work quietly and without disrupting things.  As much as possible, anyway.

Software installations, or as I’ll call them, “packages”, can come in a variety of shapes, and you can (and will) make your own (also called “repackaging” or “wrapping”).  The most common forms are Windows Installer (aka “MSI”) packages, and InstallShield or “setup.exe” packages.  Even many “setup.exe” packages are actually wrappers for one or more MSI packages, but I’ll stay out of the weeds for now on this.

Moving along… the vast majority of both types of packages produced in the past five years support built-in silent install options.  You’ve probably seen them as “msiexec /i /qn” or “/quiet” or “setup.exe -S” and so forth.  Some vendors create their own installers and offer their own unique set of silent install features.  But there’s more than just the silent part.  There’s also the pre-configuring part.
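To make those switches a little more concrete, here are a couple of typical silent-install invocations, shown as you might run them from a PowerShell prompt.  The package names and log path are placeholders, and the exact switches for “setup.exe”-style installers vary by vendor, so always check the vendor’s deployment documentation.

    # Windows Installer (MSI) package: fully silent, no automatic reboot, verbose log
    msiexec /i "AcmeApp.msi" /qn /norestart /l*v "C:\Windows\Temp\AcmeApp_install.log"

    # Typical setup.exe-style installer; /S is common (NSIS) but far from universal
    .\setup.exe /S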

It usually won’t lighten your IT workload if you push out 5,000 silent installs, only to find out the next morning that everyone still needs to enter an activation code, or answer some “setup” prompts, before they can start using the product.  Thankfully, most modern installation packages provide features for slipping in custom settings and so on.  The level and manner in which they support this varies quite a bit, from slick GUI tools like those provided by Microsoft, Autodesk, TechSmith, and Adobe, to special downloads intended just for “enterprise” deployments (Google, TechSmith, Oracle, etc.).
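As a rough illustration of that “pre-configuring” idea, here is how it often looks with an MSI package: you pass public properties and/or a transform on the command line.  The transform file name, serial number and shortcut property below are made up for the example; the real names come from the vendor’s enterprise deployment guide.

    # Hypothetical example: silent MSI install with a transform and public properties
    msiexec /i "AcmeApp.msi" /qn /norestart `
        TRANSFORMS="AcmeApp_Custom.mst" `
        SERIALNUMBER="XXXXX-XXXXX-XXXXX" `
        DESKTOP_SHORTCUT=0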

Silent, pre-configured installations are only the beginning.

The usual list of hair-pulling challenges that deployment engineers enjoy includes dealing with updates, upgrades, and uninstalls.  You might think those are simple, and sometimes they are.  Usually, however, they involve a lot of condition checking, and backing up and restoring things as well.

There are also the wonderful license activation aspects to deal with.  For example, some vendors allow you to upgrade “in-place”, essentially importing the existing settings and license data during the upgrade.  Others use files or registry keys to copy and reload data.  And others, who live in the fiery pits of Hell, force you to contact them to deactivate the old license before a new one can be activated and used.

And you wonder why some IT folks drink too much.

But that’s still not all.  There’s also the logistics side of things: where to store (and share) the original source content, as well as the modified or custom content you create to make it work; and then the deployment part, where you stage content for remote computers (and users) to access from remote locations.  The more spread-out your WAN is, the more significant this can be in your planning efforts.

Let’s also not forget the platform conditions.  Does the machine have sufficient disk space, CPU and memory (RAM) to attempt an installation of a given product?  Does it run on the supported version(s) of Windows and Windows “SKU” types, as well as service packs?  Is it 32 or 64 bit?  How about the dependencies, such as .NET framework versions, Java Runtime versions, and the others like Oracle clients, SQL native or Express editions, Quicktime player, Adobe Flash Player, Air, and Silverlight?  Oh my.  Drink up!
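Here’s a minimal sketch of the kind of pre-flight checking I’m talking about, using WMI from PowerShell.  The thresholds (5 GB free, 4 GB of RAM, Windows 7 or newer, 64-bit only) are arbitrary examples; substitute whatever the product actually requires.

    # Pre-flight check sketch: free disk space, RAM, bitness and OS version
    $os   = Get-CimInstance Win32_OperatingSystem
    $disk = Get-CimInstance Win32_LogicalDisk -Filter "DeviceID='C:'"

    $freeGB  = [math]::Round($disk.FreeSpace / 1GB, 1)
    $ramGB   = [math]::Round($os.TotalVisibleMemorySize / 1MB, 1)   # value is reported in KB
    $is64bit = [Environment]::Is64BitOperatingSystem

    if ($freeGB -lt 5)  { Write-Output "FAIL: only $freeGB GB free on C:"; exit 1 }
    if ($ramGB -lt 4)   { Write-Output "FAIL: only $ramGB GB of RAM"; exit 1 }
    if (-not $is64bit)  { Write-Output "FAIL: 64-bit Windows required"; exit 1 }
    if ([version]$os.Version -lt [version]'6.1') { Write-Output "FAIL: unsupported OS version"; exit 1 }

    Write-Output "PASS: $($os.Caption), $freeGB GB free, $ramGB GB RAM"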

The more applications in your aggregate mix of goodies, the more intricate this web of dependencies and potential conflicts becomes.  Some tools are available to help with this, such as Microsoft’s Application Compatibility Toolkit (ACT), the Microsoft Assessment and Planning (MAP) toolkit, and Flexera Software’s AdminStudio, to name a few.  But you’ll spend just as much time with smaller utilities, such as Sysinternals’ PsTools and Process Monitor (among others), Orca, WMI Explorer, and whatever your preferred flavor of log viewer and script code editor happens to be.

After reading all this mess I’ve laid out above, your backpack still isn’t quite ready for the mountain journey ahead of you.  Nope.

You will need to practice your Kung Fu on the wooden statue of Windows command-line tools.  Open up CMD and get familiar with some of the most commonly used commands:

  • MSIEXEC
  • REGSVR32
  • REG (query, add, delete, import, export, compare, flags, etc.)
  • SC
  • DISM
  • ICACLS
  • WMIC
  • ROBOCOPY (yes, it’s hidden in there now along with XCOPY)
  • SHUTDOWN
  • SDBINST
  • PRNCNFG, PRNDRVR
  • SCHTASKS
  • DRIVERQUERY

Those should get you started.  Learn the options; search for examples and test them (on a non-production computer of course).
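A few sample one-liners to get that practice started, runnable from a PowerShell prompt.  The product name, share path and task name are placeholders.

    # Search the registry for an installed product's uninstall entry
    reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" /s /f "AcmeApp"

    # Check the state of a Windows service (sc.exe, not the PowerShell alias "sc")
    sc.exe query wuauserv

    # Mirror a package source folder to a local cache, retrying twice on failures
    robocopy "\\server01\Packages\AcmeApp" "C:\Cache\AcmeApp" /E /R:2 /W:5

    # List installed MSI products (slow, but occasionally handy)
    wmic product get name

    # Review a scheduled task in detail
    schtasks /query /tn "AcmeDeployment" /v /fo LIST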

Refresh your NTFS and Share permissions skills.  Pay attention to how computers will access things over the network when running tasks using their local SYSTEM account.
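One wrinkle worth a quick sketch: when a task runs as SYSTEM and reaches across the network, it authenticates as the computer account (for example, CONTOSO\PC001$), not as any user.  One way to handle that, assuming a domain environment, is to grant a computers group read access at both the NTFS and share levels.  The domain, group, folder and share names below are placeholders, and the SmbShare cmdlets require Windows 8 / Server 2012 or later.

    # NTFS permissions on the folder behind the package share
    icacls "D:\Shares\Packages" /grant "CONTOSO\Domain Computers:(OI)(CI)RX"

    # Share-level permissions
    Grant-SmbShareAccess -Name "Packages" -AccountName "CONTOSO\Domain Computers" -AccessRight Read -Force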

Finally, and this depends on your involvement with Active Directory management:  Group Policy.  I don’t usually recommend deploying software using GPO settings, even though it’s easily done.  It just doesn’t offer the flexibility and granularity for pre-validation and exception handling that other methods provide.  Still:  Understand how GPO and Group Policy Preferences play into the overall mix of controlling the configuration of computers and users.  Example…

You will often find yourself needing to turn a feature or service on or off, enable or disable something, or make a configuration change, prior to installing software as part of a deployment.  You could build those things into your deployment package, but in many cases you will realize it is not only 100 times easier, but 100 times more reliable, to do it with a properly-configured GPO.

Back to the Camp Fire and Marshmallows

That was a long-winded horror story indeed, and just in time for Halloween.  I didn’t lay all that on you to make your eyeballs fall out.  I did so because skipping it would be like diving into building a custom car from a pile of parts without ever discussing how nuts and bolts work with wrenches.  It’s a complicated process at times, and there are many moving parts to consider.

So.  How then do you “automate” software deployments?  Here’s the 101-level, 500,000-foot view of the process:

  1. Prepare the silent, pre-configured installation
  2. Test the installation
  3. Share it
  4. Invoke the installation remotely, locally or by having computers request it on their own
  5. Collect the completion status results for monitoring

That last one might have caught you off guard.  But if you don’t build in some sort of “one stop shop” for watching over this assembly line, how will you ever know if boxes are falling off the conveyor belt in the back of the building?

The entire “point” of “automation” is saving time.  It’s also about making things correct, consistent and reliable.    If you can’t see what’s going on from start to finish, you can’t ensure things are correct, consistent and reliable.

I’ve written a few books on the packaging side of things, and there are plenty more books and web sites out there already.  There are also plenty of great GPO learning resources available, such as GPAnswers.com, MyITForum, StackOverflow, and Microsoft TechNet.  So I’m not going to dive into those two aspects here.  I do, however, want to cover the rest of this:  The deployment and monitoring aspects.

Rolling up Sleeves

Laughable to some, but still able to win a fight, is the nifty and built-in Task Scheduler.  Equally cool is the command-line version: SCHTASKS.  If you want to automate deployments, you will probably want your computers doing as much of the work on their own as possible.  The trick is getting them off the training wheels and rolling along without falling over.  By that, I mean setting up the tasks to run correctly and making sure you don’t set too many computers to run things at the same time on the same days.  How far you can push that depends upon how many computers you have, the quality of your network bandwidth, and the capabilities of your server(s).
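For illustration, here’s one way that scheduling might look, sketched with SCHTASKS: a weekly off-hours task that runs a deployment script as SYSTEM.  The task name, script path, day and time are placeholders; stagger the start times across groups of machines so they don’t all hammer the server at once.

    # Create a weekly task on a client that runs the deployment script as SYSTEM
    schtasks /create /tn "AcmePackageRunner" `
        /tr "powershell.exe -NoProfile -ExecutionPolicy Bypass -File \\server01\Deploy\Run-Packages.ps1" `
        /sc WEEKLY /d SUN /st 02:00 /ru SYSTEM /rl HIGHEST /f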

That’s actually step 2.  Step 1, after all, is having a package to run.  But there’s still a missing block in this flowchart.  If you have a bunch of packages or scripts to install individual applications, and a scheduled task, how do you put those together?  And, more importantly, how do you make sure packages are executed in a preferred “order”?

You need some sort of “client” or “package manager” process in between those two.  This is easy to do with a script.  Basically, you write (or acquire, ha ha) a script that invokes the packages you wish to deploy using a list that controls the order and who gets what.  Then you schedule that script to run from each computer, using its local SYSTEM account.  Why SYSTEM?  Because (a) it’s built in, (b) it usually has sufficient access rights to perform all the tasks necessary for installing things and more, and (c) you don’t have to bother with managing a password.  It’s just easy.  You could use other accounts, and that’s fine as long as it works.  Your choice. Democracy rules.
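To make that less abstract, here’s a bare-bones sketch of such a “package runner”, written in PowerShell (though, as noted in a moment, the language is your call).  The share paths, the CSV layout (ComputerName and PackageScript columns) and the package script names are all hypothetical, and the real thing would need far more error handling.

    # Minimal package-runner sketch: find this computer's assignments and run them in order
    $deployShare = '\\server01\Deploy'        # placeholder share
    $logShare    = '\\server01\DeployLogs'    # placeholder share
    $computer    = $env:COMPUTERNAME
    $logFile     = Join-Path $logShare "$computer.log"

    # Which packages does this computer get, and in what order?
    $assignments = Import-Csv (Join-Path $deployShare 'assignments.csv') |
        Where-Object { $_.ComputerName -eq $computer }

    foreach ($item in $assignments) {
        $package = Join-Path $deployShare "Packages\$($item.PackageScript)"
        "$(Get-Date -Format s)  START  $($item.PackageScript)" | Out-File $logFile -Append

        & $package                  # each package script is expected to end with 'exit <code>'
        $code = $LASTEXITCODE

        "$(Get-Date -Format s)  END    $($item.PackageScript)  exit=$code" | Out-File $logFile -Append
    }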

The choice of scripting language doesn’t matter.  Whatever works for you, and is compatible with all the computers you intend to manage, should be fine.  Do some testing to make sure before you go “all in” though.  If you intend to go with PowerShell, I would strongly recommend considering the latest preview version of 5.x, as it includes some incredible performance improvements, especially for COM-interop needs.

Getting Closer

That covers the packaging and the deployment, and now comes the monitoring.  Believe it or not, the monitoring part can consume the most thought and effort to develop to your liking.  This is because monitoring leaves more to personal preference than any other part of this chain of events.  But let’s take this in baby steps…

For starters, you need something to provide a status or “result” following each installation event.  In fact, you should create some sort of record from the very start of each installation request, through each internal step and on to the final result.  Error or “exit” codes are the most common (and most reliable) key to use for reporting status during and after each step of a process.  If you’re not familiar with even the most basic result codes like 0, 1605 or 3010, you really should pause here and do some homework.
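As a crib sheet, here are a few Windows Installer codes you’ll bump into constantly, and how a script (continuing the $code variable from the runner sketch above) might translate them:

    # 0 = success, 3010 = success but reboot required,
    # 1605 = product not installed, 1618 = another installation already in progress
    switch ($code) {
        0       { $status = 'SUCCESS' }
        3010    { $status = 'SUCCESS_REBOOT_REQUIRED' }
        1605    { $status = 'NOT_INSTALLED' }
        1618    { $status = 'INSTALL_ALREADY_RUNNING' }
        default { $status = "FAILED ($code)" }
    }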

The next step is producing a “record” of these pieces of data.  That usually means creating a log file, but you can use other options, such as database tables and so on.  Log files are arguably the “easiest” approach.  You can redirect output from scripts to files very easily.  Then the files can be uploaded to a central share, and from there you can apply other tools to collect summary results for reporting.  This is usually where the personal preference part comes in.  There are many alternate ways to collect and report status information.
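For example, one simple (and entirely hypothetical) pattern: write the log locally first, so it survives a network hiccup mid-install, then copy it up to the central share.

    # Capture all output streams from a package script locally, then copy to the log share
    $localLog = "C:\Windows\Temp\$($env:COMPUTERNAME)_deploy.log"
    & '\\server01\Deploy\Packages\Install-AcmeApp.ps1' *> $localLog
    Copy-Item $localLog -Destination '\\server01\DeployLogs\' -Force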

One option, and this is only ONE, so no worries if you hate this one:

Client script resides in a shared folder on a server.  Scheduled Tasks on each computer invoke that script at a set day/time interval.  The script reads the computer name and uses that to find matching package names in a list.  The packages are scripts that run installers, updates, or uninstalls (heck, any task that a script can accomplish is fair game).  As each package is executed, it generates a log file which is copied up to another shared folder on the server (can be the same or a different server, it doesn’t matter).

You should now have a working process that delivers a few basic/simple packages to clients and returns some log files to the server.  Then you apply another script, with or without a scheduled task, to produce one or more summary reports of how things are working or not working.
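Here’s a rough sketch of what that summary script could look like, assuming the log layout from the runner sketch above (one log per computer, with END lines carrying exit=<code>):

    # Summarize the most recent result per computer and export a CSV report
    $logShare = '\\server01\DeployLogs'
    $report = foreach ($log in Get-ChildItem $logShare -Filter '*.log') {
        $lastEnd = Select-String -Path $log.FullName -Pattern '\sEND\s' | Select-Object -Last 1
        if ($lastEnd -and $lastEnd.Line -match 'exit=(\d+)') {
            $code = [int]$Matches[1]
            [pscustomobject]@{
                Computer = $log.BaseName
                LastRun  = $log.LastWriteTime
                ExitCode = $code
                Healthy  = ($code -in 0, 3010)
            }
        }
    }
    $report | Sort-Object Healthy, Computer | Export-Csv "$logShare\summary.csv" -NoTypeInformation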

Additional Considerations

You still need to consider how to prevent computers from running the same list of packages again once they’ve been completed.  Again, this is a matter of personal preference and not difficult to do.  You need to decide on this before you start building the “system”, however, as it plays a huge role in how things work.
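One possible approach, purely as an example: stamp a registry value per package after a successful run, and have the runner skip anything already stamped.  The key path and package name below are arbitrary choices, not any kind of Windows convention.

    # Skip a package if its "completed" stamp already exists; stamp it only after success
    $stampKey    = 'HKLM:\SOFTWARE\AcmeDeploy\Completed'
    $packageName = 'Install-AcmeApp'
    if (-not (Test-Path $stampKey)) { New-Item -Path $stampKey -Force | Out-Null }

    if (Get-ItemProperty -Path $stampKey -Name $packageName -ErrorAction SilentlyContinue) {
        Write-Output "$packageName already completed on this machine; skipping."
        return
    }

    # ... run the package here, and only stamp it if it reported success ...
    Set-ItemProperty -Path $stampKey -Name $packageName -Value (Get-Date -Format s)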

You need to provide a means for sharing dependency packages, so you don’t end up creating duplicate/redundant copies (.NET, JRE, etc.) for all the packages that need them.

Once you start hitting a limit on concurrent tasks from multiple computers against the server, you need to start thinking about staggering their schedules and/or spreading the server shares out onto more hosts.  Actually, if your computers are spread across many LANs, you will likely end up doing that anyway unless your WAN links are really good.

The best option for managing lists, queues, collections of data and reporting is through a database.  If you haven’t already decided on that, keep that on your to-do list.  This doesn’t have to include the client-side logging aspects, but don’t rule that out either.

Baby steps.  Even though this entire discussion is aimed at a small environment (fewer than 100-200 computers), that doesn’t mean you should dive right in without doing careful planning and testing.

Conclusion

System Center Configuration Manager is obviously the king of this entire realm of systems automation, at least in a Windows environment.  If you can justify (and afford) it, then I would always recommend it over building your own.  But if your environment is in that odd range of “too small for enterprise tools and budget” but too big to keep managing software manually, then this might be an option for you.  As I mentioned in Part 1, it’s really nice that Microsoft gives you access to the same tools and capabilities they use internally, and that there are so many resources available to get started.

I’m fully aware I brushed over some important details in some of the main parts of this discussion.  That’s fine.  I didn’t want to go too far “in” until I see what feedback I get on this.  If you want more, I will be glad to work on that.  However, I do have a day job, so I’m playing wait-and-see.  In any case, thank you for reading, and as always: let me know what you think.

Thank you!

