A while back, I posted an article about a “Building your own SCCM” kind of project, and roughed out some of the caveats, approaches, challenges, and oddities.  I’ve since received a few e-mails asking for more detail in some areas.  So…

[Figure: gdocs_slide1]

Client Inventory Data Collection – Part 1

There are two parts to this scheme: querying for client data and collecting it, and uploading it into a central repository.  To be more precise, the two parts break down into the following sequence of events:

  1. Query for data
  2. Export data to a local store (for offline clients)
  3. Upload data to central store (when possible)
  4. Consolidate and Organize the central data
  5. Generate Reports (as needed)

Part 1 covers steps 1 through 3 above.  Part 2 of this “series” will cover steps 4 and 5.  Drink up. Smoke up. Shut up. And let’s get started…

Before You Begin

You’ll need the following things in place and ready to go…

  • Administrator rights to a computer running Windows 7 or later (yes, Windows 10 preview will do just fine)
  • A text editor (Windows Notepad will work)
  • About 30 minutes of being bored out of your skull
  • Some coffee

Collecting Data

On a Windows client, it’s not that difficult, thanks to Microsoft’s generosity with regard to API tools: WMI in particular, plus many others such as built-in scripting, data connectors, services, and networking coolness.  All included with your license fee, of course.

You can roll this burrito any one of a dozen ways and still get the job done.  After 30 years of smacking keyboards and cursing a lot, I’ve tried to lean back and take a longer view.  Always think ahead as to what might be needed later on.  It makes things WAY easier to incorporate when that time comes.

In this case, I opted to separate the instruction parameters and the execution code.  In English: I have a script (execution code) and a few XML data files (instruction parameters).  The XML file is where I can customize “what” I want to collect.  The script reads that, and like a hound dog sniffing some musty old socks, it charges off in search of who owns them (the data).
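To make that concrete, here’s a minimal sketch of the idea.  This is not the actual dpms_inventory.vbs (the real script writes its output as XML and is more elaborate), and the node names, file names, and output format here are assumptions.  The gist: read the class names out of the instruction file, query WMI for each one, and write the results to a local file.

    ' Sketch only: read WMI class names from an XML instruction file and
    ' dump each instance's properties to a local output file.
    Const ForWriting = 2
    Dim xmlDoc, classNode, wmi, inst, prop, fso, outFile

    Set xmlDoc = CreateObject("MSXML2.DOMDocument.6.0")
    xmlDoc.Async = False
    xmlDoc.Load "dpms_client_inv_hw.xml"

    Set wmi = GetObject("winmgmts:\\.\root\cimv2")
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set outFile = fso.OpenTextFile("inventory_hw.txt", ForWriting, True)

    ' Walk each <class> node and query WMI for all instances of that class.
    For Each classNode In xmlDoc.SelectNodes("//inventory/classes/class")
        outFile.WriteLine "[" & classNode.Text & "]"
        For Each inst In wmi.InstancesOf(classNode.Text)
            For Each prop In inst.Properties_
                ' Skip nulls and array-valued properties to keep this simple.
                If Not IsNull(prop.Value) Then
                    If Not IsArray(prop.Value) Then
                        outFile.WriteLine prop.Name & "=" & prop.Value
                    End If
                End If
            Next
        Next
    Next
    outFile.Close

The payoff of the split is that adding or removing a WMI class never touches the script, only the XML.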

[Figure: dpms_client_folder]

In the figure above, you’ll see the “dpms_inventory.vbs” (vbscript) file, and a bunch of XML files.  The three named “dpms_client_inv…” are the instruction files.  The three named “DT3…” are the inventory output files (my desktop is named “DT3”, not very interesting).  The three “Client …. Inventory” files are exported Task Scheduler jobs.  I’ll provide a link to all of this junk later in the article, so relax.

In all, the only important files are the following four (4):

  • dpms_inventory.vbs
  • dpms_client_inv_hw.xml
  • dpms_client_inv_sw.xml
  • dpms_client_inv_sys.xml

The three (3) Scheduled Task files are not necessary.  I created the jobs manually in the GUI and then exported them.  The exported XML files are easier to import onto other computers, especially when you’re dealing with dozens or hundreds of remote computers (if you’re in the thousands range, stop right here and go buy System Center 2012).
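For what it’s worth, importing one of those exported task definitions on another computer can be done with schtasks.exe, something along these lines (the task and file names here are just examples), from an elevated prompt:

    schtasks /Create /TN "DPMS\Client Hardware Inventory" /XML "C:\DPMS\Client Hardware Inventory.xml"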

Inside the “dpms_client_inv_hw.xml” instruction file, you’ll find a list of which WMI classes I want to query for data.  The node tree is basic (e.g. “inventory\classes\class”).  It looks like this…

[Figure: hw_inv_xml]
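In case the screenshot doesn’t come through, here’s a rough sketch of the shape of that instruction file (the class list below is illustrative; the actual file is in the download):

    <inventory>
      <classes>
        <class>Win32_ComputerSystem</class>
        <class>Win32_BIOS</class>
        <class>Win32_Processor</class>
        <class>Win32_LogicalDisk</class>
      </classes>
    </inventory>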

Obviously, like me, it’s not very sophisticated.  Anyhow, the script reads this, begins convulsing and heaving, like a cat after eating spoiled tuna, and pukes up a fur ball something like this…

[Figure: inv_data]

(At least a fur ball is easier to clean up.)  Some of the [value] attributes shown above are empty because I scrubbed the results to protect the innocent.  Some are modified because of minor changes to WMI in Windows 10 preview build 9879.  But whatever.
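If you can’t make out the screenshot, the general shape of the output is something like this (element and attribute names here are illustrative, not the exact schema used by the script):

    <inventory computer="DT3">
      <class name="Win32_BIOS">
        <property name="Manufacturer" value="" />
        <property name="SerialNumber" value="" />
      </class>
    </inventory>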

The point should be pretty easy to grasp by now: the script reads an XML file to get the signal for the next play, and breaks the huddle to begin scrimmage.  So I add two more instruction sets for Software (SW) and System data (SYS), making three instruction files: Hardware, Software, and System.  But now what?

Automating

This would be fine for running interactively on a single computer.  But what about on large numbers of computers?  One solution is to use a Scheduled Task.

[Figure: dpms_schedule]

I created a folder for DPMS (not a requirement, but I like to group my custom jobs into unique folders so I can find them more easily).  You can create Scheduled Tasks any way you prefer: GUI, command line (schtasks.exe), Group Policy Preferences, scripting, compiled programs, carrier pigeon, smoke signals, whatever.  As long as it produces a decent result, it’s fine.

I created three separate tasks to break up the inventory cycles for more flexibility.  You don’t need to do this; it’s just an example.  I set each scheduled task (okay, “Job”) to run on a recurring schedule, calling the script on the local disk, under the local SYSTEM account.
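As a rough command-line equivalent of what I clicked together in the GUI (the path, task name, and schedule below are placeholders, and I’m assuming the instruction file is passed to the script as an argument), one of the three tasks might be created like this, from an elevated prompt:

    schtasks /Create /TN "DPMS\Client Hardware Inventory" /SC DAILY /ST 02:00 /RU SYSTEM /TR "cscript.exe //B C:\DPMS\dpms_inventory.vbs C:\DPMS\dpms_client_inv_hw.xml"

The //B switch just tells cscript to run in batch mode so nothing pops up on screen.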

Why?

The schedule can be modified or set to trigger on events such as “On Logon” or “On Startup”, etc.  Go nuts.  The SYSTEM account has sufficient access rights to grab all the data I need and it doesn’t require any password management.  The script is kept local so that it can continue to run in the background on laptops and tablets while not connected to the Death Star corporate network.

Hopefully that makes sense.

So, what next?  The script is kicked off three times, using three distinct instruction files as parameters, to crank out three distinct inventory report files.  Then they need to be uploaded, which is where the final step in the script comes in: it attempts to copy the files to a designated share.
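The upload step itself doesn’t need to be fancy.  A minimal sketch of the idea (the UNC path and local folder below are made up, and the real script is more careful about errors): check whether the share is reachable, copy the files if it is, and quietly give up if it isn’t so the next scheduled run can try again.

    ' Sketch only: copy local inventory files to the central share, but
    ' don't blow up if the share isn't reachable (laptops off the network).
    On Error Resume Next
    Dim fso, uploadPath
    Set fso = CreateObject("Scripting.FileSystemObject")
    uploadPath = "\\SERVER1\DPMS_Inventory\"

    If fso.FolderExists(uploadPath) Then
        fso.CopyFile "C:\DPMS\DT3*.xml", uploadPath, True   ' True = overwrite
    End If
    On Error GoTo 0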

For this example, and mostly to prove it can be done, I am running the tasks on a workgroup computer and the logs are uploaded to a shared folder on a domain server.  There is no trust configured and no proxy account.  I simply configure the permissions (NTFS and share ACLs) on the server folder to lock it down enough to allow computers to upload.  The human trust factor can never be ignored.  (Being able to invoke the local SYSTEM context requires local admin rights, which means someone gave the user local admin rights, or used a weak password, so….)

NOTE:  This mix of domain and workgroup voodoo is not a recommended configuration.  It is always best to configure things like this within the context of a managed AD domain environment, where all moving parts are members of the same (or a trusted) AD namespace.  Okay, enough techie mumbo-jumbo, let’s move on…

Files versus Direct Data

It often comes up in nerd circles: which is “better” to use for collecting aggregate data from many sources into one destination?  Files or database connections?  It depends.  There are many variables and resource saturation mitigation factors (how you like them big words?  huh?  huh? yeah!).

  • You can stagger scheduling on the sources to spread the traffic load.
  • You can spread the destination targets for collection dampening (think of it like stepping onto muddy ground with big snow shoes instead of high heels).
  • You can layer the upload and import processes into tiers for phased execution.
  • You can apply compression and normalization techniques to squeeze the data into more efficient chunks.
  • and so on.

The roadblocks will always be I/O and raw resources: network interfaces, network circuits, disks, CPU and memory allocations.  But let’s be serious: the real obstacles are always about budget.  With enough budget, these “obstacles” change from brick walls into wet paper bags.

The end result should always be a database.  A relational database is to data management what politicians are to money: they just know how to work it.  The real performance and efficiency gains will be in and around the database end of this.

Wrapping Up

Now you have a script, some instruction files, and some inventory output data, and the data files have been copied up to a central share.  All the pickings of the crop are in one basket.  Time for dinner.

Click the link below to download the .ZIP file containing the four files mentioned earlier.   After extracting the files, open and read the “readme.txt” file for additional information.

[Download link]

Next Up:  Part 2 – Importing Data files into SQL Server and Making some Web Reports.  For no additional cost.
