I’m sure some of you read that headline and spit your milk through your nostrils.  Probably followed that up with some expletives, and at least one of your pets tore off out of the room in fear for its life.  This isn’t going to be short, so grab a drink and sit back while I go into more detail.  I’m not saying it’s easy, and even if it were “easy”, how would I quantify that?

It’s not a joke.  Allow me to digress…


The Basics

What is SCCM?  Well, System Center Configuration Manager.  Ha!  I had to go there.  But as far as the product itself is concerned and the diverse features it provides today, it’s a kitchen sink of kitchen sinks.  But at the core, it’s still pretty much what it was from the beginning, along with some bolted-on additions.  The core has always been:

  • Inventory data collection and reporting
  • Software deployment and monitoring

Not long after it rolled out of the garage, it grew a bigger engine, some new seats and a bigger trunk.  Microsoft added Software Metering, Software Updates, and Operating System Deployment (OSD).  Then came Endpoint Protection, Asset Intelligence, improvements to Applications and Deployments, the hierarchical site framework, and on and on.  Those are nice, and very powerful, features, no doubt.  But dare I say the vast majority of SCCM customers today still lean most heavily on those two bare-knuckled core features.

If that’s enough for you, as it is for many other shops, then this is well within your reach.  Of course, the extent of one’s reach always comes with a list of caveats.  If you like to tinker and explore, if you understand conditions and Booleans and if/then expressions, and if you understand files and shares and all that, then you’re probably equipped to play with this stuff.  If not, just read it for the entertainment value and try not to fall asleep on me.

Is this going to get you in trouble with Microsoft?  Hell no!  They give you the same tools to play with that they themselves use to build the products they sell.  Think of it like shopping for cars versus auto parts: buy versus build.  Except, imagine if Ford and GM gave you a warehouse full of parts and tools at no additional cost once you bought a vehicle from them.  That would be pretty cool.

The Machinery

Essentially what SCCM provides is a client-server architecture. Most of the processing aspects are asynchronous, meaning that the server doesn’t have to wait on the clients to do their thing before it continues on with other chores.  Clients don’t have to wait on the servers either, except for when content is not yet ready for assigned deployments.  And even then, they can continue on with other tasks; checking back once in a while to see when the meal is on the counter and the bell is rung.

But at the most basic level, you have a server that provides stuff, and receives stuff, and clients that receive stuff from the server, and submit stuff back to the server.  It’s worth noting that the server receives and sends with other servers as well as clients.

The clients get an agent that reads instructions from the server, does what the instructions tell it to do (inventory, status, deployment executions, updates, etc.), and reports back with the results when told to do so (that’s the scheduling part).

The server holds most everything in a database.  Actually, in a SQL Server database.  This includes site configuration settings, application and package configurations, deployment assignment configurations, schedules, and so on.  It also contains all of the data reported back from the client agents.  This includes hardware and software inventory, all sorts of CIM data, client health and statistical stuff, and lots of queries and reports to help sort and find what you need to see on the other end.

Granted, this is a very, and I mean VERY, high-level abstraction.  The specific inner workings that convey each step to the next are much more detailed, and immaterial to this discussion.  So if you thought I was going to sprinkle some magical inventory dust and come behind it with a magical collection mop and it “just works”, well, you can relax.  It’s a pocket watch that keeps great time.  If you look inside the case your eyeball might fall out.

So, back to the discussion: The core features relate to querying and configuring clients, hence the name “configuration manager”.  Let’s dissect these two core features into separate sub-discussions.

Inventory Data Collection

The basic building blocks on the inventory side revolve around WMI (Windows Management Instrumentation) and the Win32 CIM classes, most of which live under the root\cimv2 namespace (yes, I know, there are others, but please, drink your soda and let me make a fool of myself, mmkay?).
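If your WMI is rusty, here’s about the smallest possible taste of it.  A throwaway sketch, nothing more:

  ' Connect to the root\cimv2 namespace and ask one simple question
  Set wmi = GetObject("winmgmts:\\.\root\cimv2")
  Set results = wmi.ExecQuery("SELECT Caption, Version FROM Win32_OperatingSystem")
  For Each item In results
      WScript.Echo item.Caption & " (" & item.Version & ")"
  Next

Save that as a .vbs file, run it with cscript.exe, and you’ve just done (a very tiny piece of) what the SCCM client agent does all day.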

If you’re not at all familiar with WMI then this entire article is not what you should be reading.  This discussion is aimed at the mechanics who work on cars all day and want to sit back and chat about building their own car while drinking some cold beer.

Let’s look at the basic ingredients and what you might have to spend.  This is what Microsoft gives you*:

Pardon my French, but holiest of shits!  They could have easily ignored such an effort.  Yet they made the effort even when they had absolutely NO need to; when they were in the enviable position of unquestioned business and consumer market dominance across multiple vertical segments of the IT industry, from the 1990s through the 2000s.  I’m not bashing any of the competition.  Competition is king.  It drives this stuff to not only exist, but to thrive and evolve.  But you have to give Microsoft credit for even bothering with things like MSDN and all the free and accessible features.

So, what do you need to buy?  Windows.  That’s it.  If you have legal copies of Windows installed and are using them in an Active Directory domain, it’s like getting a ticket to see your favorite band play, and everyone else who shows up brings your favorite food and wants to share.  Oh, and they have your favorite beer too.  I’m referring to the collective potential of these free capabilities that come with each Windows node, as well as their added capabilities when joined to a directory environment; it’s a player versus a team.

What’s the Catch?

You gotta do some reading.  Oh man.  Bummer.  Too bad you can’t just plug a cable into your head and “ba-zing!” you know it.  Yeah, the Matrix did that already. Someday we’ll have that.

The other catch to all of this is timing.  Or I should say: scheduling.  You need to decide how often you need a fresh, updated report from each client about its hardware and software.  Err on the side of collecting it as seldom as possible, without sacrificing accuracy.  That threshold varies by organization and business environment.  In any case, I wouldn’t recommend collecting it more than once per day per client.  If your machines are changing that much, that frequently, you have bigger issues to deal with.

Okay, back on track: let’s say you want clients to report in once per week.  Then you have to decide what they should report.  What specific aspects of their hardware and software do you really want and need?  Keep in mind that the more you collect, the longer it takes and the more overhead it puts on local resources (CPU, memory, disk I/O, etc.).  Also, if you have a lot of machines collecting a lot of inventory data and reporting it at the same time, you may have upstream issues to consider: network I/O, and NIC traffic at the server.  This brings us to the shell game.

To play it safe, start with a basic set of properties and add more later on.  The basics might include the computer name, hardware model, CPU (model or caption), memory (total physical), disk information (size, free space), and software aspects like the operating system, service pack (CSDVersion), and BIOS information (asset tag, serial number).
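For the curious, here’s roughly what asking WMI for that starter set looks like.  Treat it as a sketch; these are all standard Win32 classes, but test against your own hardware before trusting it:

  ' Sketch: collect the starter inventory set from root\cimv2
  Set wmi = GetObject("winmgmts:\\.\root\cimv2")

  For Each item In wmi.ExecQuery("SELECT Name, Model, TotalPhysicalMemory FROM Win32_ComputerSystem")
      WScript.Echo "Name: " & item.Name & "  Model: " & item.Model & "  RAM (bytes): " & item.TotalPhysicalMemory
  Next
  For Each item In wmi.ExecQuery("SELECT Name FROM Win32_Processor")
      WScript.Echo "CPU: " & item.Name
  Next
  For Each item In wmi.ExecQuery("SELECT DeviceID, Size, FreeSpace FROM Win32_LogicalDisk WHERE DriveType = 3")
      WScript.Echo "Disk " & item.DeviceID & "  Size: " & item.Size & "  Free: " & item.FreeSpace
  Next
  For Each item In wmi.ExecQuery("SELECT Caption, CSDVersion FROM Win32_OperatingSystem")
      WScript.Echo "OS: " & item.Caption & "  SP: " & item.CSDVersion
  Next
  For Each item In wmi.ExecQuery("SELECT SerialNumber FROM Win32_BIOS")
      WScript.Echo "Serial: " & item.SerialNumber
  Next
  For Each item In wmi.ExecQuery("SELECT SMBIOSAssetTag FROM Win32_SystemEnclosure")
      WScript.Echo "Asset tag: " & item.SMBIOSAssetTag
  Next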

That might seem like a lot, but if you’ve ever opened SQL Server Management Studio and poked your eyeballs around the tables and views in an SCCM site database, you should realize that the set above is a VERY small set of properties to collect.

The Shell Game

Building a tiered architecture is somewhat like being an air traffic controller.  You have to keep your eyes, ears and mind focused intently on many things at once, in order to make sure they all work together.  If you don’t, the many things will often work against one another and pretty soon you’re looking for a new job.

You need to think about what happens on each client, as well as on the server.  Then there’s the network fabric in between to consider.  And maintenance windows, and scheduled outages, and peak user demand periods, and so on.  Ignoring any one of these can result in major headaches later on.

This isn’t really brain surgery or rocket science.  It just takes a little math and making sure you size the roads to handle the right number of cars.  Think of it like a tree with roots.  The roots gather water and nutrients from beneath the surface and channel it up into the trunk and out to the limbs.  The leaves also gather resources (sunlight, oxygen, CO2) and channel that inward.

The clients in your fictional architecture are the tips of the roots.  The root branches are the network circuits.  The trunk is the server, where everything gathers and is stored.  The leaves would be other servers in the site that interact and also feed information in (status reports, inventory data), and disseminate information (policies, content) outward.

How’s that analogy for not having any coffee for the past 12 hours?  I’d call it amazing.  Then again, for me, making a sentence with more than three words without coffee is astounding.

So, what now?

Boiling this down into a soup you can eat with a spoon: here’s just ONE way you could approach building this contraption:

Clients can be made with script code.  You can start with something as simple as Rob van der Woude’s WMI Code Generator and glue the output together using a code editor.  Test it out until you get the results you want from a test computer.

You can run the clients using the built-in Task Scheduler.  Add some uploading features, so the results get pushed up to a central share.

Add this to a few other computers and watch how the process takes shape.  Watch the network traffic overhead, and the switches and NICs on the server as well.  This is important even with one to three test computers, because you can measure the scale of impact going from 1, to 2, to 3 clients and use that to predict the impact at 50, 100, 500 and 5,000 clients.
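A quick back-of-the-napkin example, with purely made-up numbers: if one client’s inventory report weighs in around 50 KB, then 5,000 clients reporting inside the same one-hour window push roughly 250 MB at the server’s NIC, disk and database in that hour.  Spread those same reports across a 24-hour cycle and you’re averaging closer to 10 MB per hour.  Same data, very different impact.  That’s why staggering the schedules matters.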

You can push the data as files to a share or directly into a database.  Either way will work, so which of these two paths you choose is really a matter of comfort and preference.  Depending upon your scale, I’d recommend the latter.

Inside the database, add some data import and manipulation routines (to clean up oddball or mismatched data).  If you’re comfortable with database development, start normalizing the shit out of the table structures and add some views.  This is obviously a big project and, like a house, you can spend a lot of time detailing in many areas.  Don’t laugh at normalization.  Ignoring it is like ignoring a cancer screening.

Side Story:  I was recently asked to develop an application-layer data interface between SCCM, AD and a well-known inventory and service ticket management system (name withheld to protect the idiots that own it).  While this unnamed product has some amazing capabilities, nobody bothered normalizing the database.  When I looked around inside it, it was like opening the hood of a big SUV and finding a bundle of tiny treadmills with hungry squirrels trying to power it while killing each other for the same nuts.  Horrible work.  So bad, in fact, that it’s frighteningly easy to crash the entire system by doing a fairly routine data import.  Too many locks and points of contention over too few blocks of data.

Going Further

You can obviously skip the file-based data aspects and go directly from client to database using ADO or ADO.NET.  You can try different scripting languages (VBScript/COM, PowerShell, Python, etc.) and see how those work in your environment as well.

Once your erector set is humming along nicely, and you’ve spent some time making the database end really shine, it’s time to consider the user aspect:  What will the customers get from this?  What reports?  What administrative controls?  Think about how they should (or would like to) access this new toy.  As a desktop application?  A mobile application?  A web interface?

Once you have a well-defined database platform with a decent amount of data digesting inside, you’re ready to build a face on this gold mine.  It’s like building the most incredible house from the inside out.  Now it’s time to think about the front door, the walkways, and the landscaping.

Bust out Visual Studio and build some web reports.  Some apps.  Consumable web services.  Whatever.  The sky’s the limit.  Okay, I need to slow down.  Too much caffeine.

All you need to make this work:

  • Some script code
  • A scheduled task
  • A central share, and/or
  • A central database

The script code needs to query for specific data and send it somewhere: a file, a database table, whatever.  The file format you prefer doesn’t matter as long as it’s easy to process for database imports later on.  XML works great, but a well-formatted text file (CSV, INI or column-delimited) can also work.  Here’s an example XML configuration guide and a VBScript inventory reporter that consumes it (don’t worry, they’re actually PDF files with the code pasted in)…

inventory.xml  inventory.vbs
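Since those are stuck inside PDFs, here’s a stripped-down sketch of the idea.  The element and attribute names below are my own inventions for illustration, not necessarily what’s in inventory.xml:

  ' Sketch: read a list of WMI classes and properties from an XML
  ' config and query each one.  Assumes a config shaped something like:
  '   <inventory>
  '     <class name="Win32_OperatingSystem" properties="Caption,CSDVersion" />
  '     <class name="Win32_BIOS" properties="SerialNumber" />
  '   </inventory>
  Set xml = CreateObject("MSXML2.DOMDocument.6.0")
  xml.Async = False
  xml.Load "inventory.xml"

  Set wmi = GetObject("winmgmts:\\.\root\cimv2")

  For Each node In xml.SelectNodes("/inventory/class")
      className = node.GetAttribute("name")
      props = node.GetAttribute("properties")
      For Each item In wmi.ExecQuery("SELECT " & props & " FROM " & className)
          For Each propName In Split(props, ",")
              WScript.Echo className & "." & Trim(propName) & " = " & _
                  item.Properties_.Item(Trim(propName)).Value
          Next
      Next
  Next

The nice part of the config-file approach is that changing what gets collected becomes an XML edit, not a script rewrite.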

The scheduled task just fires off the script on a regular interval and runs it with the appropriate credentials to do what it needs to do.  I prefer the local SYSTEM account, but that’s not a requirement.  Once you get it tweaked, you can export it and import it on other computers.  Remember to consider staggering the times.  You can also deploy scheduled tasks using Group Policy Preferences (hint, hint).
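As a for-instance (the path, task name and start time are placeholders; adjust to taste), a daily task that runs the script as SYSTEM can be created from an elevated command prompt with something like:

  schtasks /Create /TN "Inventory Agent" /TR "cscript.exe //B C:\Scripts\inventory.vbs" /SC DAILY /ST 03:00 /RU SYSTEM

Export that from one machine, vary the /ST value per group of machines, and you’ve got your staggering.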

The central share is helpful for collecting the uploaded inventory data (files) from all the clients.  Keep in mind that running the task using the local SYSTEM account means you will need to allow “Domain Computers” to have change/modify rights on the remote share in order for content to be uploaded.  You may still want a central share even if you opt for a pure client-to-database upload model, so that you can post content for clients to download (more on that in part 2 of this article).
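The upload piece can be as humble as a file copy to a UNC path.  A sketch, with an invented share name:

  ' Sketch: push the local inventory file up to a central share.
  ' \\server\inventory$ is a placeholder; Domain Computers need
  ' change/modify rights there when the task runs as SYSTEM.
  Set fso = CreateObject("Scripting.FileSystemObject")
  Set net = CreateObject("WScript.Network")
  localFile = "C:\Windows\Temp\" & net.ComputerName & ".xml"
  If fso.FileExists(localFile) Then
      fso.CopyFile localFile, "\\server\inventory$\" & net.ComputerName & ".xml", True
  End If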

The database is where the information should end up.  Because, let’s face it: a database kicks the crap out of files when it comes to searching, sorting, grouping, filtering and so on.  It also makes for a fantastic platform on which to build custom application interfaces.  If you don’t expect to exceed the limits of SQL Server Express, you can use that and it won’t cost you a dime.  You could also use MySQL or PostgreSQL if you’re comfortable with open source options.

The script can bypass file output and pump the results directly into a database as well.  Just keep in mind the I/O impact and how you handle the scheduling.
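That’s the gist of the bonus script below.  A bare-bones illustration via ADO (the server, database, table and column names here are all invented):

  ' Sketch: insert one inventory row straight into a remote SQL Server
  ' table via ADO.  Connection string and schema are placeholders.
  Set conn = CreateObject("ADODB.Connection")
  conn.Open "Provider=SQLOLEDB;Data Source=DBSERVER;" & _
      "Initial Catalog=Inventory;Integrated Security=SSPI;"

  Set cmd = CreateObject("ADODB.Command")
  Set cmd.ActiveConnection = conn
  cmd.CommandText = "INSERT INTO dbo.Machines (Name, Model, Serial) VALUES (?, ?, ?)"
  cmd.Parameters.Append cmd.CreateParameter("Name", 200, 1, 64, "PC001")    ' 200 = adVarChar, 1 = adParamInput
  cmd.Parameters.Append cmd.CreateParameter("Model", 200, 1, 64, "Latitude")
  cmd.Parameters.Append cmd.CreateParameter("Serial", 200, 1, 64, "ABC123")
  cmd.Execute
  conn.Close

Parameterized commands beat string-glued SQL here; the data coming off thousands of clients will eventually contain something that breaks naive concatenation.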

Bonus/Epilogue: inventory2.vbs (a modified VBScript embedded in a PDF, which provides a crude example of pumping queried data directly into a remote database table)

Next time:  The Software Deployment Aspects
