System Center, Technology

Cranky Dave sez Knock that Shit Off

Maybe it’s because I had a string of calls which rubbed the same raw nerve endings too close together. Idk. But I feel like the message isn’t getting out there. Please, if you work with customers, and see these things, urge the living shit out of them to reconsider. If that doesn’t work, duct tape and white vans are still available.

  • Doing things “just because” it’s been that way since 2010
  • Giving out local admin rights like candy. If you want to be a drug dealer, run for Congress or Senate.
  • Using login scripts as first response to every need.
  • Repackaging every app installer, even when it only needs “/S” to run quiet
  • Stop over-complicating things just to be cute/clever. Look at me virtualizing my virtual servers inside another virtual server with 3 databases, and I only have 100 devices to manage.
  • Read the #(*$&@(*#$&(_(*@(#*$&) docs and follow the “supported” terms. Stop assuming you’re smarter than a room full of MVPs, *and* a yacht filled with drunk attorneys who all graduated from Harvard.
  • If your environment has to be complicated, it’s most likely because your business is over-complicated, and possibly broken. (Paul’s rule: If you automate a broken process, you will only ever get an automated broken process).
  • Stay within support – Don’t cry for help with your SQL 2005 “mission-critical” database. If it was “mission-critical” it would be running on a supported version.
  • Keep your shit patched – SQL 2016 is nice, but RTM is like 2 service packs and 12 cumulative updates behind! This isn’t 1962. We finished the Moon program a while ago.
  • Do I sound cranky? I’m grabbing a Snickers.

databases, Scripting, System Center, Technology

Basic ConfigMgr HealthChecks using PowerShell

I’m long overdue for a deep-dive (pardon the pun), so drink-up and let’s move on…

The power provided by PowerShell comes from a lot of different directions, one of them being that you can leverage a ton of built-in functionality without having to buy additional software licensing, or even write all the messy code. That’s right, once again, I’m on my “modules are freaking fantabulously increditastical” soap box. I’ll be using a few different modules to do the heavy lifting: dbatools, PSWindowsUpdate, Carbon (for the permission checks), and the ActiveDirectory module from RSAT.

And even though I won’t be showcasing it in this post, if you wish to export anything to Excel, rather than hopping through CSV first, take a look at the module ImportExcel by Doug Finke (the Export-Excel function in particular).
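
For example, piping anything straight into a worksheet is a one-liner (assuming ImportExcel is installed; the path here is just an example):

Get-Service | Export-Excel -Path "$env:TEMP\services.xlsx" -WorksheetName "Services" -AutoSize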

Heads-Up: This is not intended to be a “solution” that you simply download and run. I prefer to share the basic pieces and a direction, and let you take it and run with it however (and wherever) you like. Sharing a fully-bolted, polished solution doesn’t leave you with room to explore and customize without a lot of reverse engineering. Here’s the bricks, have fun building.

If you’re wondering why I’m not covering CMHealthCheck, it’s because (A) it would violate the “heads-up” goal mentioned above, and (B) that module is becoming a bit dated anyway (I’m working on a replacement, feedback is always welcome).

And Now for a Word on Modules

I’ve been in a few discussions about “make vs. buy” or “build vs. borrow” view of scripting. For decades (yes, I’m that freaking old, so you’ll have to speak up), I had always leaned towards building everything. Even when finding a near-perfect match online, I would rewrite it to my tastes. Not anymore. Time is more precious, and I’m not too proud to accept someone else might have provided a better option than I would have built.

In 2020, the state of online sharing is 1000 times what it was 10 years ago. It’s now to the point where not finding a close match for a desired technical solution is the exception, rather than the norm. Only the newest emerging things are lagging behind, mostly due to the trend of over-stimulated coke-snorting CI/CD fanaticism, but I’ll leave that for another episode of “Old man says GTFO my lawn you little CI/CD pipeline bastards!” But, I digress.

To me, modules are like car parts. Even when you build, or restore, a car, you’re not likely going to make EVERY single part from scratch (unless you own a smelting factory, chrome dip tank, a cow farm for leather, and so on). Most components are built by someone else. So, building things from parts is just a natural thing to me. It’s organic. Okay, soap box session is done. Let’s move on.

Getting Things Ready

To perform almost any health assessments, you’ll need sufficient access to the resources. In a typical ConfigMgr environment (if there is a typical ConfigMgr environment), this will translate into:

  • Full Administrator (in ConfigMgr)
  • ServerAdmin (in the SQL instance)
  • Local Administrator (on the site servers)

These are often granted to the account which was used to install the Configuration Manager site. Hopefully, it’s not an actual “user” account (that a human logs in with every day), but a service-type account. If you are not a DBA (or the DBA-ish person who “owns” the SQL instance) you may want to confer with them first, to make sure you don’t step on any toes. Pro-tip: bring doughnuts and fresh jokes.

When I say “Local Administrator”, I don’t mean adding your domain account directly into the local Administrators group, although that does work. It’s recommended that you get membership via a domain group, following that whole AG(U)DLP chain of handling that Microsoft has recommended for decades.

I already mentioned the PowerShell modules I’ll reference, so those need to be installed on your working computer (not on the site servers or database server, unless that’s all you’re working on).

To save on repetitive typing, let’s define some variables to use throughout the following examples. Replace the string values with whatever your TEST LAB environment uses:

$dbhost = "cm01.contoso.local" # site SQL host FQDN
$cmhost = "cm01.contoso.local" # CM primary site host FQDN
$site   = "P01" # CM site code
$cmdb   = "CM_P01" # CM site database

MECM Site Information

To help with finding things in SQL, mainly the default Views, I recommend running the following snippet, so you can use the results to search more easily:

$dbviews = Get-DbaDbView -SqlInstance $dbhost -Database $cmdb -ExcludeSystemView

An example for finding views which relate to something like “site”…

$dbviews | Where {$_.Name -match 'site'} | select name

You can also pass this into a cheap GridView (only $0.99 while supplies last) to pick-and-run your favorite view…

$view = $dbviews | Out-GridView -Title "Pick a View to Query" -OutputMode Single
if (![string]::IsNullOrEmpty($view)) {
  $query = "select * from $view"
  Invoke-DbaQuery -SqlInstance $dbhost -Database $cmdb -Query $query
}

I have a slightly more fancy version of the above sample, as a function, up on my GitHub at http://bit.ly/2SYYOOL. You can load it directly into a console session, and run it, using Invoke-Expression…

iex (New-Object System.Net.WebClient).DownloadString('http://bit.ly/2SYYOOL')
Invoke-CmDbView -SqlInstance $dbhost -Database $cmdb

Site Information Summary

Invoke-DbaQuery -SqlInstance $dbhost -Database $cmdb -Query "select * from v_Site"

General Client Information

I recommend saving the output of the following script to a variable, for use as a baseline for other operations (rather than requesting new data for each sub-query). I’m using $cmdevices for this example…

$cmdevices = Invoke-DbaQuery -SqlInstance $dbhost -Database $cmdb -Query "select * from v_CombinedDeviceResources where (name not like '%unknown%') and (name not like '%provisioning device%') order by name" | 
select Name,MachineID,SerialNumber,MACAddress,DeviceOS,DeviceOSBuild,CoManaged,ClientVersion,IsVirtualMachine,ADSiteName,LastMPServerName,LastPolicyRequest,LastDDR,LastHardwareScan,LastSoftwareScan,LastActiveTime,LastClientCheckTime,ClientCheckPass

From this you can filter on things like the following examples.

Devices with Old or Missing Hardware Inventory

Find devices which haven’t reported hardware inventory yet…

$cmdevices | Where {[string]::IsNullOrEmpty($_.LastHardwareScan)}

Find devices which have reported hardware inventory in the past, but not within the past 30 days…

$cmdevices | Where {(-not[string]::IsNullOrEmpty($_.LastHardwareScan)) -and ((New-TimeSpan -Start $_.LastHardwareScan -End (Get-Date)).Days -gt 30)}

Compare Device Coverage with AD

$adComps = Get-ADComputer -Filter * -Properties lastlogontimestamp,whenCreated,operatingsystem,description

I included some additional attributes in case I want to also compare last-login dates, and so on. But anyhow, to use this to compare devices between AD and MEM, you can run some super-basic tests like this…

$adComps | Where {$_.Name -notin $cmdevices.Name} | select Name
$cmdevices | Where {$_.Name -notin $adComps.Name} | select Name

The example above shows I have more devices in Active Directory which are not in the ConfigMgr database, than I have devices in ConfigMgr which are not in Active Directory. What kind of “health” is this? It’s a measure of how clean and controlled your environment really is.
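
To put an actual number on it, count each side using the same filters:

($adComps | Where {$_.Name -notin $cmdevices.Name}).Count
($cmdevices | Where {$_.Name -notin $adComps.Name}).Count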

General Windows Host Information

Get-CimInstance -ClassName "Win32_ComputerSystem" -ComputerName $cmhost
Get-CimInstance -ClassName "Win32_OperatingSystem" -ComputerName $cmhost
Get-CimInstance -ClassName "Win32_SystemEnclosure" -ComputerName $cmhost
Get-CimInstance -ClassName "Win32_Product" -ComputerName $cmhost
Get-CimInstance -ClassName "Win32_BIOS" -ComputerName $cmhost

Disks and Disk Space

Get-CimInstance -ClassName "Win32_LogicalDisk" -ComputerName $cmhost | % {
  [pscustomobject]@{ 
    Drive  = $_.DeviceID
    Name   = $_.VolumeName
    SizeGB = [math]::Round(($_.Size / 1GB),2)
    FreeSpaceGB = [math]::Round(($_.FreeSpace / 1GB),2)
    Used   = [math]::Round(1 - ($_.FreeSpace / $_.Size), 2)
  }
}

Network Connection Properties

Get-CimInstance -ClassName "Win32_NetworkAdapterConfiguration" -ComputerName $cmhost | 
    Where {$_.IPEnabled -eq $True} | 
        Select IPAddress,DefaultIPGateway,IPSubnet,MACAddress,DNSServerSearchOrder,DNSDomainSuffixSearchOrder | ForEach-Object {
            [pscustomobject]@{
                IPAddress   = $_.IPAddress -join ','
                IPGateway   = $_.DefaultIPGateway -join ','
                IPSubnet    = $_.IPSubnet -join ','
                MACAddress  = $_.MACAddress
                DNSServers  = $_.DNSServerSearchOrder -join ','
                DNSSuffixes = $_.DNSDomainSuffixSearchOrder -join ','
            }
        }

File Shares

Get the file shares, folder and share permissions. This information can be used to further automate for “drift” reporting and remediation, when someone (or some process) modifies them for whatever reason. (Note: The following example has no exception handling. You may want to add some nested try/catch handling inside the foreach-object (%) section.)

$shares = Get-CimInstance -ClassName "Win32_Share" -ComputerName $cmhost | 
  where {$_.Name -ne 'IPC$'} | % { 
    $spath = "\\$cmhost\$($_.Name)"
    $fpath = "\\$cmhost\$($_.Path -replace ':','$')"
    $perms1 = Get-CPermission -Path $spath
    $perms2 = Get-CPermission -Path $fpath
    [pscustomobject]@{
      Name = $spath
      Path = $_.Path
      Description = $_.Description
      SharePermissions = $perms1
      FilePermissions = $perms2
    }
  }
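
Here’s a rough sketch of the “drift” idea: dump a baseline once, then compare later runs against it. The baseline path is made up, and this only compares share names; extend it to the permission objects as you see fit.

$baselineFile = "C:\Baselines\$cmhost-shares.json"
# first run: save the baseline
$shares | ConvertTo-Json -Depth 4 | Out-File $baselineFile

# later runs: compare current share names against the baseline
$baseline = Get-Content $baselineFile | ConvertFrom-Json
Compare-Object -ReferenceObject $baseline.Name -DifferenceObject $shares.Name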

Stopped or Failed Services

Another common check is looking for services which are set to “automatic” but are not currently running…

Get-CimInstance -ClassName Win32_Service -ComputerName $cmhost |
  Where {$_.StartMode -eq 'Auto' -and $_.State -ne 'Running'}

Ooooh. Missing Updates?

What about those pesky Windows updates on your site systems? Yeah, they need them. And SQL Server updates too.

Get-WindowsUpdate -ComputerName $cmhost -WindowsUpdate
# note: if the -ComputerName connection fails, try using Enter-PSSession instead
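
If the -ComputerName route gives you grief, a remoting session is the fallback. Something like this should work, assuming PSWindowsUpdate is installed on the remote host and PowerShell remoting is enabled (I’m using Invoke-Command here since it’s script-friendly; Enter-PSSession works for interactive poking):

Invoke-Command -ComputerName $cmhost -ScriptBlock { Get-WindowsUpdate -WindowsUpdate }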

Event Logs

The Windows Event Log is a gold mine for finding current and potential issues with a Windows Server.

# system log "critical","warning" and "error" entries in the last 24 hours...
$xfilter = @'
<QueryList>
  <Query Id="0" Path="System">
    <Select Path="System">*[System[(Level=1 or Level=2 or Level=3) and TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select>
  </Query>
</QueryList>
'@
$sysEvents = Get-WinEvent -LogName "System" -ComputerName $cmhost -FilterXPath $xfilter

# application log "critical","warning" and "error" entries in the last 24 hours...
$xfilter = @'
<QueryList>
  <Query Id="0" Path="Application">
    <Select Path="Application">*[System[(Level=1  or Level=2 or Level=3) and TimeCreated[timediff(@SystemTime) &lt;= 86400000]]]</Select>
  </Query>
</QueryList>
'@
$appEvents = Get-WinEvent -LogName "Application" -ComputerName $cmhost -FilterXPath $xfilter
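
Raw event lists get noisy fast. Summarizing by source is a quick way to see where the smoke is coming from:

$sysEvents | Group-Object ProviderName | Sort-Object Count -Descending | Select-Object -First 10 Count,Name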

ConfigMgr Server Logs

Oh boy. This part isn’t fun. You can search the server and component status events within the site database, which is often faster, but I’ll save that for another post.

For this one, I borrowed a very nice script by Adam Bertram, aka “Adam the Automator” (Twitter: @adbertram) and modified it slightly (okay, I poured dumb sauce all over it) to read server logs instead of client logs. I realize that some logs don’t follow a consistent internal format, so if you know of a better alternative, please chime in?

iex (New-Object System.Net.WebClient).DownloadString('http://bit.ly/2vIhtXk')
$logs = ('sitecomp','dataldr','hman','distmgr','smsexec','wsyncmgr')
$logs | ? {
  Get-CmServerLog -ComputerName $cmhost -SiteCode $site -LogName $_ | ? {$_.Category -eq 'Error'}
}

Database Information

A Configuration Manager site isn’t much good without a SQL Server database. And a SQL Server database isn’t much good if it’s suffering from issues resulting from mis-configuration, neglect of maintenance and updates, and so on. So any real “health check” of a system implies checking all the parts which it depends on, which in this case is the site database.

SQL Instance Summary

This will return basic version and update information, such as version, build, service pack, cumulative update and KB levels, and support status.

Get-DbaBuildReference -SqlInstance $dbhost

Getting SQL Server update compliance can be tricky. At least it has been for me, and probably because I’m dense AF. But if you find it tricky too, then maybe it’s from something else. Anyhow, here’s one way…

# check whether the instance is at the latest known build (per the dbatools build reference)
Test-DbaBuild -SqlInstance $dbhost -Latest

CM Database Summary

This will return a summary of your SQL database, such as name, instance, status, recovery model, compatibility level, collation, owner, and basic backup info.

Get-DbaDatabase -SqlInstance $dbhost -Database $cmdb

Connection Authentication Scheme

Test-DbaConnectionAuthScheme -SqlInstance $dbhost

SQL Instance Memory Allocation

This will return summary information about the current maximum memory limit, and current usage for the instance (in megabytes).

Get-DbaMaxMemory -SqlInstance $dbhost

You can also retrieve current memory usage stats…

Get-DbaMemoryUsage -ComputerName $dbhost

Database File Information

This will return details about each .mdf and .ldf file for your CM database, such as path, size, status, reads/writes, and more.

$dbfiles = Get-DbaDbFile -SqlInstance $dbhost -Database $cmdb

Database File Auto-Growth Information

This is basically an extension of the example above, which dives more into the auto-growth aspects.

$dbfiles | select LogicalName,Size,Growth,GrowthType,UsedSpace,NextGrowthEventSize,TypeDescription

Database Index Fragmentation

This will return the current fragmentation state of your database indexes (indices?). I prefer to break this into two (2) parts: a query file, and the script code. The query file contains only the SQL statement, which the script code imports using the -File parameter. The first example below is the SQL statement, followed by the PowerShell script.

SELECT 
  dbschemas.[name] as 'Schema',
  dbtables.[name] as 'Table',
  dbindexes.[name] as 'Index',
  indexstats.avg_fragmentation_in_percent as 'FragPct',
  indexstats.page_count as 'PageCount' 
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
  INNER JOIN sys.tables dbtables on dbtables.[object_id] = indexstats.[object_id]
  INNER JOIN sys.schemas dbschemas on dbtables.[schema_id] = dbschemas.[schema_id]
  INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
  AND indexstats.index_id = dbindexes.index_id
WHERE 
  indexstats.database_id = DB_ID()
ORDER BY 
  indexstats.avg_fragmentation_in_percent desc
$qfile = "[ENTER_THE_SCRIPT_PATH_HERE]\Documents\indexfrag.sql"
$threshold = 40 # index frag percent baseline, whatever you prefer
$stats = Invoke-DbaQuery -SqlInstance $dbhost -Database $cmdb -File $qfile
[math]::Round(($stats | where {$_.FragPct -gt $threshold}).Count / $stats.count, 2)
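
And to list the worst offenders by name:

$stats | Sort-Object FragPct -Descending | Select-Object -First 10 Schema,Table,Index,FragPct,PageCount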

Failed SQL Agent Jobs (last 24 hours)

Get-DbaAgentJobHistory -SqlInstance $dbhost -StartDate (Get-Date).AddHours(-24) | Where {$_.Status -ne "Succeeded"}

Database Backup History

Get-DbaDbBackupHistory -SqlInstance $dbhost
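
On a busy instance that returns a lot of rows, so narrowing it to the CM database and a recent window keeps it readable (the -Database and -Since parameters should do the trick):

Get-DbaDbBackupHistory -SqlInstance $dbhost -Database $cmdb -Since (Get-Date).AddDays(-7)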

Conclusion

I think I’ve talked enough for now, and I’m out of coffee. As I mentioned earlier (I think), this is only a sampling of some of the things you can bolt together using off-the-shelf modules, and some minimal touch-up work.

As the MECM or MEM/CM team adds more to the Management Insights library of tools, you can expect to peel off a few custom tools, but that may be a gradual process. Keep an eye on this feature with each new build that you install.

This isn’t at all restricted to MEM/CM/ConfigMgr, or even SQL Server (even though I spent a lot of time on them here).

Now, put on your best pair of Latex gloves, and smile. 🙂

databases, Projects, Scripting, System Center, Technology

Cool SQL Tricks with DbaTools and MEM ConfigMgr

If you’ve ever wanted to export information from a Configuration Manager site database, you’ve probably found many different ways to do it. Sometimes the management console provides an easy method, sometimes not. Sometimes the SMS Provider (WMI) makes it easy, sometimes not (think set-join operations, for example). Then there’s SQL queries.

One of the best features of SQL, as a language, is the ability to perform dataset joins, or combining results from multiple source tables/views/functions as if the source data was all in one place (it kind of is, but kind of isn’t). Then comes the “how do I run a query against the database from PowerShell?” question.

A lot of examples use .NET ADO methods like connections, datareaders, data adapters, and so on. Which is fine, but yuck! That’s a lot of messy code and I just took a shower. I’m lazy, and I hate writing more lines of code than I really need. Enter PowerShell modules.

Modules are basically toolboxes, filled with tools to help with certain tasks, by taking care of the messy or complicated stuff for you in the background.

If you’ve ever used PowerShell, you’ve used modules. Just type Get-Module and press Enter to see the ones currently loaded in memory. To see all of the modules installed add -ListAvailable after it (e.g. Get-Module -ListAvailable).

One module I absolutely love is dbatools. It’s a toolbox of functions for dealing with SQL Server from inside and outside. By “inside” I mean interacting with tables, views, functions, agent jobs, and so forth. By “outside” I mean server configuration, statistics, backups, processes, memory allocation, version and build info, and much more. It really is amazing. (follow @psdbatools on Twitter for updates and tips)

Install-Module dbatools

Then type “Get-Command -Module dbatools” and lean closer to your screen. Slide a plate under your mouth to catch the drool.

I use a lot of functions in that module, and even so, only about 10% of the overall list. That’s still enough for my needs, and when new needs arise, it’s usually ready for me. For a full list of commands – click here. (note: if it’s missing something, you can suggest it, or contribute via their GitHub site).

Invoke-DbaQuery

The Invoke-DbaQuery function submits a query to a specified database from either a file or text. To use a file, specify the full path/name using the -File parameter. To use text, specify the -Query parameter. Here’s an example for showing all the Collections, along with their ID and membership count…

Note that the -SqlInstance parameter refers to the SQL server hostname (and instance name, if not referencing the “default” instance), and the -Database parameter is the name of the database (I’m not being snarky, I really meant that).

$query = "select CollectionId,Name,MemberCount from v_Collection order by Name"
Invoke-DbaQuery -SqlInstance "cm01.contoso.local" -Database "cm_p01" -Query $query

Before you get too excited, and start getting twitchy fingers, please heed the following recommendation:

  • Do not EVER submit changes directly to the database for a ConfigMgr site. No writing, deleting, renaming, or altering of any kind. Just read operations.
  • Limit your read operations to just what you need, and keep in mind that every request you submit adds to the overhead it is already dealing with.
  • TEST EVERYTHING IN AN ISOLATED, NON-PRODUCTION LAB ENVIRONMENT THOROUGHLY BEFORE YOU EVER THINK ABOUT TRYING IT IN PRODUCTION (unless you win the PowerBall and just don’t care, but don’t forget me?)

WMI vs SQL vs REST (Admin Service)

Some of you may ask about the differences between SQL, WMI and REST when it comes to reading data. WMI (SMS Provider) has been the go-to since, well, forever. The problems with WMI for intensive data operations are:

  • WMI is slower than SQL. Typically a lot slower. Compared with SQL queries, it’s like listening to Mitch McConnell talk after he’s had 4 martinis, and then listening to John Moschitta Jr. talk after he’s had 4 cans of Red Bull. And WMI queries (WQL) don’t support compound join operations.
  • The Admin Service (REST API) is newer, and will eventually replace the SMS Provider. It provides a robust channel using a web service, which, like the SMS Provider, acts as a broker between the requestor and the actual SQL database. If you’ve used the Modern Driver Management solution from SCConfigMgr.com, the task sequence step which runs a PowerShell script is using a custom web service which provides the same function (only at a smaller scale)
  • SQL is by far the fastest for returning large datasets, and excels at complex join operations. But it’s also tapping into the very heart of ConfigMgr and should be done carefully.

The Admin Service is probably going to become the go-to path for interacting, programmatically, with MEM/CM databases, and it is very cool indeed. But for me, there will always be a soft spot for using SQL. So don’t take this as a “you should use SQL always for everything” suggestion. It’s really just ONE of several ways you can get data out of the site.

For more on the Admin Service, read Sandy‘s article here, and Adam Gross‘s article here.

Back to SQL for a moment… Another example, this one for querying the general MEM/CM site information:

$query = "select * from v_Site"
Invoke-DbaQuery -SqlInstance <server\instance> -Database <name> -Query $query

As of Tech Preview build (5.00.8946.1000) there are 1757 non-system views in the CM_xxx database schema. Sometimes it can be a challenge to search (or filter) within SSMS to find one you need. PowerShell can make it a little easier, especially with Get-DbaDbView.

$views = Get-DbaDbView -SqlInstance "cm01" -Database "cm_p01" -ExcludeSystemView | select Name | sort Name

Because this is often a slow return, I recommend saving the output to a variable (as shown above). This makes it less painful to filter/sort than sending a new request.
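
From there, filtering locally is instant:

$views | Where {$_.Name -match 'collection'}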

Getting more serious, like putting-my-pants-on serious, let’s check the database index fragmentation status. For this, since the query is a bit longer, I will dump it in a file and save it as “indexfrag.sql” (shown below) and use the -File parameter to submit the contents to the SQL instance. I also want to save the output to a variable, so I can perform some filtering and sorting:

SELECT 
  dbschemas.[name] as 'Schema',
  dbtables.[name] as 'Table',
  dbindexes.[name] as 'Index',
  indexstats.avg_fragmentation_in_percent as 'FragPct',
  indexstats.page_count as 'PageCount' 
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
  INNER JOIN sys.tables dbtables on dbtables.[object_id] = indexstats.[object_id]
  INNER JOIN sys.schemas dbschemas on dbtables.[schema_id] = dbschemas.[schema_id]
  INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
  AND indexstats.index_id = dbindexes.index_id
WHERE 
  indexstats.database_id = DB_ID()
ORDER BY 
  indexstats.avg_fragmentation_in_percent desc
$stats = Invoke-DbaQuery -SqlInstance "cm01" -Database "cm_p01" -File "C:\users\clueless\Documents\indexfrag.sql"

Now I can filter the results to see how bad my fragmentation really is:

[math]::Round(($stats | where {$_.FragPct -gt 40}).Count / $stats.Count, 2)

My lab database doesn’t seem too bad (only about 3% of its indices are more than 40% fragmented), but that’s because I use Ola Hallengren’s DB Maintenance Solution (as explained by Steve Thompson).

By the way, speaking of Ola Hallengren’s solution, dbatools has a built-in function for installing it. Check out Install-DbaMaintenanceSolution. One caveat is that it doesn’t create the nice Maintenance Plan that Steve describes, and some of the jobs aren’t configured with all the options (e.g. IndexOptimize – USER_DATABASES).

To list the Agent jobs, use Get-DbaAgentJob:

Get-DbaAgentJob -SqlInstance "cm01" # dumps a lot of details for each job
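
To trim that down to a quick health glance, select a few properties (these names come from the underlying SMO job objects, so adjust if yours differ):

Get-DbaAgentJob -SqlInstance "cm01" | Select-Object Name,IsEnabled,LastRunDate,LastRunOutcome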

Gluing things together

These are just a few building blocks, and you can easily start building all sorts of diabolical planet-saving awesomeness to earn your next bonus check. For example:

  • Query for Devices which meet (or don’t meet) a given condition
  • Output to a CSV, XLSX file, or HTML file
  • Attach to an email and send it on a schedule
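
A rough sketch of that whole chain, with made-up file names, addresses, and version threshold:

$query   = "select Name,ClientVersion,LastPolicyRequest from v_CombinedDeviceResources where ClientVersion < '5.00.8913.1000' order by Name"
$devices = Invoke-DbaQuery -SqlInstance "cm01.contoso.local" -Database "cm_p01" -Query $query
$csvFile = "$env:TEMP\old-clients.csv"
$devices | Export-Csv -Path $csvFile -NoTypeInformation
Send-MailMessage -From "reports@contoso.local" -To "memteam@contoso.local" `
    -Subject "Devices below the current client version" -Body "Report attached." `
    -Attachments $csvFile -SmtpServer "smtp.contoso.local"

Wrap that in a scheduled task and the “send it on a schedule” part is covered too.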

Aside from basic things like this, you can install the module on ANY machine in your environment which has access to the network; it doesn’t need to be on the SQL host (actually, it’s preferred that you don’t install things on your site servers unless absolutely necessary).

Having dbatools on a workstation, or utility server, allows you to consume data from multiple instances and databases. You can use modules like dbatools for building custom automation processes. For example, you can install it on an Azure Automation Hybrid Worker to leverage it with Runbooks. This is handy for pulling data from disconnected environments (think multiple AD forests without lateral trusts). Even shaping the extracted data for uploading into Log Analytics (now Azure Monitor).

The sky is the limit. The world is your oyster. Now, drink some super-caffeinated stuff and get busy!

Cloud, Devices, Scripting, System Center, Technology, windows

Rant 42 – Getting In Tune with Intune

Keep in mind that everything I’m about to say is within the context of the following:

  • This is entirely focused on managing Windows desktops and laptops. This has nothing to do whatsoever with mobile devices.
  • This is based on querying the Graph API “beta” version, but much of it applies to v1.0 as well
  • I’m more comfortable crawling inside Configuration Manager and SQL Server databases
  • It’s entirely possible that I’m a complete idiot and haven’t the slightest clue what I’m talking about
  • I’m struggling to be funny, but I’m tired and pissed off trying to get inventory data for a customer and keep landing on “if this was ConfigMgr I’d have it in 5 seconds”
  • After three weeks of no beer, wine or anything fun, I just finished two very tasty beers. And now my fingers want to type some shit out.

In the Beginning

(Morgan Freeman voice here) Most of the last 20 years I’ve worked in “IT” have been spent muddling with computer devices of various kinds; not necessarily from a hardware aspect (calm down), but from a software and infrastructure angle. Deploying, configuring, managing, repairing, replacing, decommissioning. Okay, and breaking, and swearing at. Mostly those which run some flavor of Microsoft Windows.

As such, there are many commonplace scenarios I’ve dealt with, which may seem very familiar to you:

  • Inventory
  • Health (prescriptive and reactionary)
  • Disposition
  • Compliance
  • Financial
  • Legal
  • Control

The first three are pretty common everywhere, regardless of what hardware or software brands are used. The fourth is emerging as the top contender. And while financial and legal aspects are important, ask any attorney, it’s the last item, Control, that takes the lead. After all, you can’t really guarantee any of the others without having control in some respect.

After almost 25 years of getting involved with various aspects of inventory management, what I find most interesting is a seemingly disconnected view of Control and Inventory. Many organizations seem to view them as semi-related, or unrelated. But they are actually entirely related. Like politicians and pedophiles, they’re impossible to separate.

You need inventory to gain control, and you need control to gather inventory. And for decades, traditional (on-prem) solutions have evolved and matured to meet the needs of almost every customer in every environment when it comes to inventory and control.

Most modern cloud-based offerings provide a wide variety of control capabilities, but are lacking when it comes to inventory. This is arguably due to having a mobile-device focus, at least during their inception. Mobile phones typically don’t incur the same depth of inventory concern as desktop and laptop computers. This is partly due to the platform restrictions and licensing costs for available apps (consider the most expensive phone app vs. the most expensive desktop apps).

The operational aspects are different, as are the lifecycle management aspects. Think of the typical sequence of events in the life of a mobile phone and compare that to a typical laptop or workstation. Before you get your panties tied around your neck in angst, compare deploying most mobile apps with deploying things like ArcGIS, AutoCAD, Inventor, USB-related software, device drivers, language packs, multiple user profiles, and so on. Yeah, they’re not quite the same animals.

For decades, Active Directory was the backbone of device inventory management, but in 2020 that’s no longer exclusive. Not that AD was a comprehensive or robust solution in that regard, but that it was a back-plane on which other applications, scripts, databases, and so on were extended. AD was, and still is, a foundation for managing devices (and users and groups). And now there’s Azure AD.

More and more customers, particularly smaller organizations, don’t want any “servers” to manage. Many larger organizations have smaller teams that they’d like to be less encumbered with on-premises infrastructure. They want devices to live in the cloud, and be managed entirely, and exclusively, in the cloud.

The challenge is that cloud solutions were built, initially, to address a different need: mobile devices (mostly phones and tablets). Traditional desktops and laptops have enjoyed decades of maturing management tools. And now the market is pushing one solution to cover a different need.

The more that customers come knocking on our doors with questions about replacing on-prem inventory and management systems with Intune, the more challenges I encounter with gaps in functionality. This is particularly true for inventory management. And since a huge number of customers rely on inventory for a variety of business needs, they demand a solution that provides comprehensive and accurate inventory data.

That said, here’s what I’ve compiled from customer discussions, as well as my own (limited) experiences. Intune management features for Windows computers could use some improvements in at least the following areas:

  • More comprehensive inventory:
    • BIOS information (vendor, version, dates, etc.)
    • Driver information (name, vendor, version, dates, etc.)
    • Applications: Just copy what ConfigMgr gathers, that’ll nail it
    • Custom file inventory (by extension, by wildcard, etc.)
    • User Profiles
    • Windows Services
    • NIC device info, adapter config info (IP addresses, gateway, DNS, etc.)
    • Address some quirks: PhysicalMemoryInBytes always shows zero (0), ethernetMacAddress is empty for almost all devices, and chassisTypes shows “unknown”
    • Windows Events (filtering, reporting)
  • Win32 application deployments
    • More robust and simplified troubleshooting (log access/viewing, custom logs)
    • More robust lifecycle management: upgrades, complex batch execution (task sequences), and so on.
  • Patching could be better, particularly having the ability to decline/reject specific updates
  • Run PowerShell scripts on a schedule, view/edit scripts in the portal, and view a history of all past-deployed PowerShell scripts per device
  • I would add REST API (Graph) changes as well, but these would depend on the preceding suggestions
  • I realize some of this is possible with things like ATP and other premium tools, but as a base product, it needs more from the start.
  • I could go on, but I’ll just point you to UserVoice

Hallucinatory Thoughts

Ever since Microsoft made a bold push for Intune, and the “cloud first, mobile first” motto was announced, the ConfigMgr masses reacted quickly with a message that “ConfigMgr is dead”. I don’t think Microsoft really intended (or expected) that perception to grow so quickly.

The dilemma this likely caused was two-fold: Internal and External

First, the ConfigMgr team was pushing on with an enormous backlog of new features, and fixes; investing heavily in beefing up ConfigMgr features. Allowing the public perception to go unchecked could risk impacting sales and revenue, and impacting team morale at a critical time. No one wants to work on a project with a doomed future. Valuable people would leave, and potentially-valuable people would avoid it entirely.

Second, imagine you’re a business looking for direction on how to manage (mostly Windows-based) devices on-prem and out in the wild. You have an older version of ConfigMgr, and are struggling to decide whether it’s worth upgrading, or finding a new solution. A pervasive message of a “dead product” would almost certainly steer you towards another solution. Again, impacting sales revenue and so on.

To avoid both risks, Microsoft had to shore-up the message that there’s a future, and it was both products combined as a unified solution. And rightly so. It makes perfect sense. They’ve admitted this is a journey and worth sticking to if you want to reap the benefits of a superior solution at a better (more affordable) cost.

However…

There’s still that group of customers who weren’t using ConfigMgr to begin with. They’re using <insert any name of inexpensive product> to manage inventory and reporting, software deployments, patch management, and so on. I’ll paraphrase a quote from one of my customers, but it matches most customers in a similar situation:

“When you compare Intune with an on-prem solution, like PDQ or LAN Sweeper, as limited as they are compared with ConfigMgr, they still outshine Intune for managing of, and reporting on, Windows desktops and laptops. And we already own them. We need a compelling reason to drop this and pay subscription for something new. It’s just not there yet.”

To be honest, most of them I’ve spoken with would love to switch to Intune. The notion of eliminating on-prem infrastructure, and using a web browser from anywhere, is very appealing. They’re not saying “no”, but “not yet”. They’re keeping an eye on it, and many have trial accounts or smaller paid subscriptions, to continue testing and learning, and comparing.

But Why?

I really don’t know why Intune development hasn’t pushed as hard and fast as ConfigMgr development has (I mean, holy McShit!). People still ask me, “Hey Dan!” (“It’s Dave”) “Right. Doug, why don’t you pick on Configuration Manager anymore?”

My answer is “Because, (A) my name is Dave and (B) there’s really nothing to pick on anymore.” That’s right. The CM team is knocking it out of the park. If you don’t believe me, come to MMS and find out for yourself. I did.

I’m sure the Intune team has equally lovely, hard-working people, lovingly hard at work to make the love work like it should. Or something like that. But I haven’t sat at a table with them, so they don’t know my face or what kind of car I drive either. So, for now at least, I can focus on them.

So, Now What?

Some of the limitations (missing capabilities) can be addressed using things like PowerShell script deployments and Win32 app deployments. But these are also limited. Let’s say you wanted to override the hamstrung inventory capabilities using a custom PowerShell script, or using an Azure Function or Automation Runbook.

Of these, the PowerShell script makes the most sense for two reasons:

  • Most likely you’re working with laptops, which roam around and aren’t accessible over the WAN 24/7, like desktops typically are.
  • Initiating the process from the remote client ensures the greatest probability of success due to timing (when it’s on, and connected, it runs, instead of poking from afar repeatedly and trying again and again)

Pulling data from a device isn’t difficult, thanks to CIM/WMI and PowerShell. But you need to store the inventory (and other) query results somewhere if you want to make use of it. Excluding the use of a third-party tool, that leaves you with two primary options:

  • PowerShell + Azure Storage Account (e.g. blob container)
  • PowerShell + Log Analytics

Both of these require some sort of credentials to access and import data. Whether that’s username/password, or a key, it has to be embedded in the script, which makes it vulnerable and risky.

Another issue is that deploying a PowerShell script from Intune only runs one time per device. Implementing a recurring/scheduled update requires either recreating the same script as a new assignment, or creating a scheduled task, and hoping it doesn’t stop working for whatever reason. Either way, it’s reinventing a wheel that’s been around for DECADES. Sort of like “Here’s a shiny new car! Wheels are still in planning, but you can make your own.”
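
For what it’s worth, the scheduled-task route is only a few lines inside the deployed script. Here’s a minimal sketch (the script path, task name, and schedule are all made up):

$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Bypass -File C:\ProgramData\Contoso\Invoke-Inventory.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 7am
Register-ScheduledTask -TaskName "Contoso Custom Inventory" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest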

Summarizing my Conclusion

Once again, paraphrasing my colleagues and customers, and anyone who doesn’t run away as soon as I start talking to them, I would say MOST people trying to manage Windows devices would LOVE to do so from a nice and pretty web portal, and without a lot of on-prem infrastructure to mess with. Buy the new device, join it to a cloud party, and manage the open bar from the cloud.

In 2020, that vision is closer than it has ever been to becoming a reality. But the other reality is that not only are there still some serious technical challenges (rural connectivity, bandwidth, idiot users, crappy applications, bullshit drivers, more idiots, decaf coffee, JRE, McAfee, etc.) that remain persistent, but they aren’t going away anytime soon. If the economy doesn’t tank in the meantime, I think in the next five years this will be as commonplace as mobile phones are today. I hope so.

But then again, in 2020, I still have to fiddle with BIOS settings, and the ConfigMgr Query Rule dialog box may outlive today’s most powerful cockroaches. So, while I’m hopeful, I still don’t have a flying car, or that paperless office I was promised 20 years ago.

Fingers crossed.

Namaste!

business, Devices, Scripting, System Center, Technology, windows

5 Things You Should Have Automated by Now

The Long Back-Story

(cue the campfire scene, under the stars, with distant harmonica and bearded old man, smoking a pipe of something, and all the little systems engineers, all gathered around to listen in their fuzzy pajamas)

For the last three decades, the roaming bean-counters of the world have quietly been building up a pressure-cooker of angst from all the walk-up status inquiries in the IT cube farms of the world. Each time they’d ask for a status update, they’d get a magical (mythical) answer. Specificity was lacking. Upper management was not happy. Vendors kept nodding in agreement, but were still focused on the product users, not the check-writers. That changed soon after the Cloud popped up.

I may blog about my thoughts on “The Future of the IT Worker”, if I have enough wine or beer to motivate me.

Short version: Shareholders buy stock in a company to make a profit on rising value (stock prices). Stock prices rise when the company increases profits. To increase profits, the company can only increase the gap between revenue and expenses. For 99.9% of businesses, IT is a “cost center”, or an expense. Shareholders DGAF* about imaging computers, change management reviews, or what your name is. They care about 2 things:

  • Increased profit margins
  • No bad press

Both of those points are impacted by expenses. Shareholders don’t like expenses. They bitch about expenses, a lot. They hire consultants to analyze expenses, and these days, one of the first areas they look at is IT. Asking questions like:

  • Why so many IT staff?
  • Why are you re-imaging every computer you buy, when they already work?
  • Why do you still have datacenters?
  • Can we move to a cheaper lease?
  • Training?! You don’t know this stuff already?

Seriously, the emphasis on “what value do you bring to the company?” is only going to get heavier and heavier.

So, in the interests of making yourself more valuable, I suggest bringing a little automation to your job. And, based on what most customers I know have already implemented, this is my 5-point list of gotta-have things:

[1] Active Directory User Account Processing

New hires. Temp staffing. Terminations. Name changes. Promotions and transfers. All of these tend to chip away at your precious time. Relying on a bundle of task-specific scripts is a good start: creating accounts, resetting passwords, adding/removing group members, and so on. But anything you have to stop and tend to with your own hands needs to be considered for automation.

Like all automation processes, it starts with the “authoritative source” of information. Usually HR. Whatever data they’re entering for a new hire, use that to drive everything else. Do not duplicate efforts by entering that information again somewhere else, as this not only wastes time, but adds risk of inconsistency.

If you don’t already have it, request access to whatever information you need to drive the entire process along. Make a list of all the user-related processes you deal with. Divide each process into distinct phases or tasks and work on them one at a time until you have the whole conveyor belt running.

Ideally, when HR says someone has been hired, your IT systems should immediately handle it. Changed departments? New surname? New job title? Done. Got fired for having sex on a forklift during work hours? Done.
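
A bare-bones sketch of the front end of that conveyor belt, assuming HR drops a CSV export somewhere you can read it (the path, column names, and OU are all made up):

Import-Csv "\\fs01\hr\new-hires.csv" | ForEach-Object {
    New-ADUser -Name "$($_.FirstName) $($_.LastName)" `
        -SamAccountName $_.Username `
        -Department $_.Department `
        -Title $_.JobTitle `
        -Path "OU=Staff,DC=contoso,DC=local" `
        -AccountPassword (ConvertTo-SecureString $_.TempPassword -AsPlainText -Force) `
        -Enabled $true
}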

Gaining experience with the HR systems and processes not only makes your job easier, it makes your role more valuable in the organization. Once the processes are automated, they will run more consistently and predictably, even if you go on vacation, and the organization will likely ask you for help automating other processes.

[2] Active Directory Computer Accounts Clean-Up

If you only have a dozen or so computers in your AD domain, you might get a pass here. But if you’re managing dozens, hundreds or thousands of computers, and you’re not running some sort of automated process to clean-out stale/unused accounts, you should be tasered in the crotch until the battery goes dead.

If you don’t already have something in place to automate this boring-ass chore, get moving. It’s really easy to implement a 3-step clean-up process:

  • Determine what criteria will be used to say a device account is stale
  • Identify and move stale accounts to an OU, and disable them
  • After X days, delete them

Once that process is tested, schedule it to run on its own.
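
A minimal sketch of the first two steps (the 90-day threshold and OU name are just examples):

# find computer accounts inactive for 90+ days, disable them, and park them in a holding OU
$stale = Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -ComputersOnly
foreach ($comp in $stale) {
    Disable-ADAccount -Identity $comp
    Move-ADObject -Identity $comp -TargetPath "OU=Stale Computers,DC=contoso,DC=local"
}
# step three: after X days, review whatever is still sitting in that OU and delete it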

There are hundreds of utilities and scripts available today to help automate this process, or you can build your own. Having a process in place means you can answer questions about asset inventory with a straight face, and calm down those bean-counters who freak out over the thought that things are out of control. “Relax, bean-counter person. I have it under control.”

Icing on the cake: “I know we requested 1500 licenses of that software, but I confirmed we only need 1250. And with that $3000 I saved us, I’d like to attend MMS MOA this year, and buy a Hello Kitty flamethrower.”

[3] Patch Management

The biggest problem I see today isn’t the patching itself, or the tools available to manage the patching. The biggest problem I still see is a lack of a process or procedure. If you’re still manually updating computers, especially endpoint devices (desktops, laptops, tablets, etc.), but even servers, pause here and do the following first:

  • Design a patching process: What, When, Where, and Who (owns each machine or system)
  • Give each group of machines or systems a name
  • Identify test machines within each group to validate monthly patches
  • Identify machines that can be patched at the same time, and which ones cannot.
  • Identify when machines can be rebooted

Having that mapped out will make it so much easier to pick and test the right solution (product or script).

After that, use your selected “test” machines for the initial pilot, and scale out from there. Start with the less critical machines and add the more critical machines later. That way you cover more machines early on, and work out the kinks before touching the high risk environments.

In the VAST majority of environments I’ve seen, the exception cases are the minority. So knocking out the machines with a consistent schedule also knocks out the biggest portion overall.

[4] Inventory Reporting

Fancy or basic, it doesn’t matter. The only thing that matters when the executives ask “how many ___ do we have?” is whether you can answer the question without lying your ass off. The other thing that matters is when the BSA* comes to your door with a warrant, but that’s another story altogether.

How anyone can manage a computing environment without some sort of inventory reporting is beyond reason. That’s like expecting airlines to operate without flight plans.

Of all the examples listed on this post, this one is the oldest of them all. And since it’s been around the longest, there’s really no acceptable excuse to not have it automated by now.

If you don’t have a software product, or service, in use, get one. Many are free. If they don’t cut it, you can easily build your own with scripting and duct tape. Even if your devices are scattered across the globe, as long as they can touch the Internet, you can build something to make them squeal and give up their inventory data.

[5] Event Monitoring

Imagine if your car didn’t have a dashboard. Or your smartphone didn’t have a battery indicator. That’s pretty much the same thing when you manage computers without some sort of event and/or log monitoring. The data is being tracked somewhere, but unless you have a clear view of it somewhere, you’ll never know. Until it all goes sideways, and then you’re scrambling to find out where to look “under the hood” as the house is burning down.

Of all the support cases I ran into between 2015-2019, which related to some sort of “oh shit, our shit is broke! please help fix!“, most of the root causes fell into one of the following buckets:

  • Ran out of disk space
  • Service account was locked
  • Service failed to start
  • Configuration change impacted other processes
  • Network connectivity failure
  • Anti-virus was blocking a critical process

Every single one of these could have been avoided with the following simple tools:

  • A monitor to report potential problems
  • An automated process to remediate each of the potential problems before they get worse

Flying blind is no way to run a datacenter, let alone a bunch of computers. Whether you prefer to buy a solution, or build it yourself, just get something in place. In every instance where this was done, the number of “oh shit!” events dropped significantly.
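
Even a dirt-simple scheduled check covers the most common culprit on that list (the threshold and addresses are made up):

$lowDisks = Get-CimInstance Win32_LogicalDisk -Filter "DriveType=3" |
    Where-Object { ($_.FreeSpace / $_.Size) -lt 0.10 }
if ($lowDisks) {
    $body = ($lowDisks | ForEach-Object { "$($_.DeviceID) has $([math]::Round($_.FreeSpace/1GB,1)) GB free" }) -join "`n"
    Send-MailMessage -From "alerts@contoso.local" -To "itops@contoso.local" `
        -Subject "Low disk space on $env:COMPUTERNAME" -Body $body -SmtpServer "smtp.contoso.local"
}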

Maybe you like getting a panicked call from a manager on the weekends, at 3am on a weekday, or while you’re on vacation. That’s not my idea of a happy life. And applying some basic automation to monitoring is not only one of the easiest types of automation, it’s often a good on-ramp to scaling your efforts into other areas that drain your time every day.

Cheers!

Cloud, databases, Projects, Scripting, System Center, Technology, windows

Brain Dump(ster fire) – POC: Azure Automation, MEM/CM, On-Prem SQL, and Email

(I put out a Twitter poll asking if I should post a “nerdy” topic, or a “stupid AF” topic first, and nerdy was in the lead, so grab your caffeine. You’re going to need it)

UPDATE 1 – I forgot to mention a few things:

  • You will need to install the PowerShell module dbatools on the Hybrid Worker machine.
  • You don’t need to install the Hybrid Worker on CM01; any machine in your lab environment will do, as long as the MECM machine and SQL database are accessible to it.
  • You don’t technically need the PowerShell module ImportExcel on the Hybrid Worker. If it’s installed, the script will export directly to an Excel worksheet (whether Excel is installed or not). If not, it exports to CSV format.

When I was a little boy, okay, littler boy, in the 1970’s, my favorite things in life were: G.I. Joe, bicycles, and Lego kits. Holy shit! 1970’s? Phone booths. Pre-cable TV. Wall-mounted phones. Gas was $1.35/gallon. And disco was the douche king of fashion nausea. Computers? Those were still science fiction stuff to us.

Anyhow, when Christmas morning came around, and I saw any one of those “magic 3”, you could’ve set fire to the entire world and I’d never noticed it. My focus was entirely on them. When it came to Lego kits, we had two (2) kinds to choose from: project kits and master kits…

Project kits were like buying a bicycle at Walmart. Master kits were like buying a Home Depot. Hopefully you get the idea. My favorites were the master kits, which I would just dump into a bigger pile with last year’s kits and make some strange, but cool, new thing. Frankenkits, I called them. I built cities with mono-rails, and spaceports, and ran the Christmas tree lights throughout. It took up half of the living room, and then, well, someone would inevitably step on a brick with their bare feet, and alas, the Lego city was no more.

With software, one of my favorite things to work on is proof-of-concept projects. They’re like mini-projects, mostly for myself, but at times they’re also part of a customer engagement, vetting a “could we do this?” question. Sometimes it’s just to see if “idea 1” or “idea 2” is easier or better, or provides something over the other.

This post is going to be a simple proof-of-concept for setting up Azure Automation, with a Hybrid Worker, to invoke data exports from the SQL Server database underpinning a Microsoft Endpoint Configuration Manager (MEMCM) primary site, and sending the data via email attachment.

A Little Background

It’s going to take years to get used to calling “System Center Configuration Manager” MECM or MEMCM. The terms “Config-Manager”, “SCCM” and so on, are so ingrained in our minds that they’ll be tossed around long after I’m gone, I’m sure of it.

The test configuration I used for this is as follows:

  • Windows Hyper-V host
  • DC01
    • 1 gb memory
    • 2 processors
    • [C:] 80 gb disk (differencing on Windows Server 2019)
  • CM01
    • 32 gb memory
    • 2 processors
    • [C:] 80 gb disk (differencing on Windows Server 2019)
    • [D:] 300 gb disk “Apps” (dynamic)
    • [E:] 300 gb disk “Data” (dynamic)
  • FS01
    • 8 gb memory
    • 2 processors
    • [C:] 80 gb disk (differencing on Windows Server 2019)
    • [D:] 500 gb disk “Data” (dynamic)
  • GW01
    • my router VM running pfSense 2.4.4
    • 512 mb memory
    • 1 processor
    • 50 gb disk (dynamic)
    • 2 NICs: WAN and LAN with LAN on 192.168.2.1 (internal Hyper-V switch)

This setup is only for testing purposes, not production. So allocated resources for VM guests are minimal.

In addition to the basic setup, the MECM site has to be operational (i.e. it has managed clients, inventory data, and has data flowing through its SQL veins). Otherwise, there’s nothing to query and nothing to report, so game over. If your site isn’t yet operational, pause here, get that resolved, and come back. I’ll wait…

Disclaimers:

  • I am NOT an “expert” with Azure or Azure Automation. I may be using a bad approach somewhere along the way, but it’s just a proof-of-concept.
  • The “cloud” is a moving target. Anything I post may likely change within a month or year, so this is all based on November 2019.
  • DO NOT TEST any of these in production environments, Azure or on-prem, or I will disown you and plug my ears, while saying “la la la la la la laaaaa la la la” until you leave.
  • This blabbering makes the assumption that you have an Azure subscription, with permissions to create Automation Accounts, and drain your bank account when you run crazy tasks and forget to check back that they aren’t spinning out of control somewhere.
  • None of this is about “should you“, but simply intended to demonstrate ONE possible option. You may have a much better alternative, and that’s fine. Love, peace and good vibes to all. Namaste.

Now, before my grass-fed non-GMO brain gets cold…

Part 1 – Azure Automation Setup

Perform the following in the Azure portal. Technically, the SendGrid part isn’t part of the Automation Account, but it’s going to be needed for sending emails later on.

  1. Create an Azure Automation account
    1. Search “Automation Accounts” and click to open
    2. Click “Add”
    3. Name: “aa-hybrid1”
    4. Subscription: <select yours>
    5. Resource Group: Create New / “rg-hybrid1”
    6. Location: <select the closest to you>
    7. Create Azure Run As Account: <yes>
    8. Click Create
  2. Configure a SendGrid account
    1. Search “SendGrid Accounts” and click to open
    2. Click “Add”
    3. Name: “mailsender”
    4. Password: <make it diabolically complex>
    5. Confirm: <again>
    6. Subscription: <select yours>
    7. Resource Group: <select “rg-hybrid1”>
    8. Pricing: choose “F1 Free”, review details, click Select
    9. Contact Information: fill out the information, click OK
    10. Legal terms: review them, click Create
    11. Click on the SendGrid account, then click Configurations. Copy the Username to Notepad for use later on.

Part 2 – Create Runbook Assets

Perform the following in the Azure portal. Runbook Assets are resources which support Runbooks, such as Variables, Credentials, Connections, Certificates, and so on.

Variables

  1. Navigate to Automation Accounts and select “aa-hybrid1”
  2. Scroll down and select “Variables”, click Add a Variable for each below:
    1. Name = “CM-SQL-HostName” / Value = <FQDN of on-prem SQL host>
    2. Name = “CM-SQL-Database” / Value = <“CM_P01” or whatever>
    3. Name = “MailSender” / Value = <make up a sender address, it doesn’t have to actually exist>
    4. Name = “MailRecipients” / Value = <valid addresses to send email to>

Credentials

Scroll down and select “Credentials”, click Add a Credential:

  1. Name = “SendGridAccount”
    1. Copy the User name from the SendGrid account properties, enter the same password you used when setting it up, confirm the password, and click Create
  2. Name = “Contoso_CMINstall”
    1. Enter “contoso\cm-install” (or whatever your on-prem account happens to be), enter the password, confirm the password, and click Create
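
Inside the runbook itself, those assets typically get pulled in with the built-in automation cmdlets, along these lines:

$sqlHost    = Get-AutomationVariable -Name 'CM-SQL-HostName'
$database   = Get-AutomationVariable -Name 'CM-SQL-Database'
$sender     = Get-AutomationVariable -Name 'MailSender'
$recipients = Get-AutomationVariable -Name 'MailRecipients'
$sendGrid   = Get-AutomationPSCredential -Name 'SendGridAccount'
$cmCred     = Get-AutomationPSCredential -Name 'Contoso_CMINstall'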

Part 3 – Create a Runbook

Perform the following in the Azure portal.

  1. Within the aa-hybrid1 Automation Account section (okay, “blade”) scroll down and click on “Runbooks”
  2. Click “Create a runbook”
    1. Name = “Export-CM-Devices”
    2. Type = “PowerShell”
    3. Description = “This is my runbook. There are many like it, but this one is mine.”
    4. Click Create
  3. When the editor panel is ready, copy and paste the code from this link, then click Save (you can trust it, I think it works most of the time – lol)

At this point, you should have the following items created and configured. However, you still need to set up a Hybrid Worker in your on-prem environment, so you can run this Runbook “on” the on-prem server. Which we’ll do in Part 4.

  • Automation Account (with Resource Group)
  • SendGrid Account
  • Runbook Assets (Variables, Credentials, Run As Account)
  • Runbook

Part 4 – Install a Hybrid Worker

Perform the following on your intended on-prem machine (a virtual machine, hopefully). For this demo, use machine “CM01”, which is your MECM/ConfigMgr primary site server with SQL Server. We need to install a PowerShell script and edit some values before running it to create the new Hybrid Worker (HW) and Hybrid Worker Group (HWG).

A HWG is a logical name for one or more HW machines. When you initiate a Runbook for on-prem execution, it is targeted at the HWG, which in turn hands it off to one of the HWs on-prem. For this part, we will create both a HW and a HWG in one step.

  1. Open a PowerShell console using “Run as Administrator”
  2. Run Set-ExecutionPolicy ByPass
  3. Run Install-Script -Name New-OnPremiseHybridWorker (answer “Y” to any confirmations. This script is published by Microsoft, a trusted name in quality software for over 500 years)
  4. Run $cred = Get-Credential (enter your Azure account which has permissions to create Runbooks, etc.)
  5. Type the following and press Enter. Remember to replace the “<…>” items with actual values for your Azure environment. Be patient after you press Enter, as it needs to download and install a few PowerShell modules, install some components, and entertain you with progress bars along the way.
$rg  = "<name of resource group>"
$aa  = "<automation account name>"
$sid = "<subscription id>"
$wg  = "mecmLab"
$cred = Get-Credential
New-OnPremiseHybridWorker.ps1 -AAResourceGroupName $rg `
    -SubscriptionID $sid -AutomationAccountName $aa `
    -HybridGroupName $wg -Verbose
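
If you don’t have your subscription ID handy, you can grab it from PowerShell first. This assumes you have one of the Azure PowerShell modules installed (the newer Az module is shown; with the older AzureRM module the equivalents are Login-AzureRmAccount and Get-AzureRmSubscription):

# assumes the Az module is installed
Connect-AzAccount
Get-AzSubscription | Select-Object Name, Id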

Note: If you get tired of being prompted about an “Untrusted repository” when referencing PSGallery, type in Set-PSRepository -Name PSGallery -InstallationPolicy Trusted, and press Enter.

After the script processing has completed, review the output to confirm “Registration Complete” near the bottom, and verify that the HWG name and machine name appear as well.

Then, go back to the Azure portal to complete the following:

  1. Go to Automation Accounts / aa-hybrid1
  2. Click “Hybrid worker groups” (down below Runbooks on the menu)
  3. Verify you see the new HWG (e.g. “mecmlab”)
  4. Click on the new HWG (e.g. “mecmlab”) to show more details
  5. Click on “Hybrid Workers”
  6. Verify you see your on-prem machine in the list (e.g. “CM01”)

Back on your on-prem machine (CM01) open Apps and Features (or Installed Applications list), and verify you see “Microsoft Monitoring Agent” installed and note the installation date.

You should now have everything configured and ready to rock-n-roll for running Azure Runbooks on-prem! Drink up! You might need it for the Normandy beach landing you’re about to enjoy. Just kidding.

Part 5 – Test your Hybrid Worker

For this step, I recommend a very basic/simple PowerShell Runbook to just display the computer name. That way you can see if it’s really executing on the (on-prem) Hybrid Worker, from within the Azure portal.

  1. Create a New Runbook
    1. Name = “Show-ComputerName”
    2. Type = PowerShell
    3. Description = (blank)
    4. Click Create
    5. In the editor panel, type in the following code snippet:
      Write-Output "computer name: $env:COMPUTERNAME"
    6. Click Save
    7. Click “Test Pane”
    8. Change the “Run Settings” option to “Hybrid Worker”
    9. Select the new HWG from the listbox (e.g. “mecmlab”)
    10. Click Start
    11. Wait for the output panel to return the results. You should see “Completed” at the top, and “computer name: CM01” below it.
    12. Up top, in the breadcrumb menu list, click on “Edit PowerShell Runbook”. If you are prompted about “Your unsaved edits will be discarded”, click OK (they’re fine)
    13. Click Publish, and then click Yes

You have now reached “World Domination Level”, collected all of the treasures, and have crushed all your enemies. Grab another can of Red Bull and proceed quickly to level 6.

Part 6 – Run the Data Export Runbook

If you don’t already have a Runbook named “Export-CM-Devices”, go back to Part 3 and come back to this part when you’re ready. Perform the following in the Azure portal, but keep an eye on your HW machine as well.

I recommend opening Task Manager within the HW machine and clicking the Details tab, so you can see what’s going on and what sort of latency you see between “Queued” and “Running” (in Azure) and what you see happen on the on-prem machine.

  1. Navigate to Automation Accounts / Runbooks
  2. Select the Runbook: Export-CM-Devices
  3. Note: If the current Runbook status is still “In Edit”, click “Edit” and make sure you didn’t screw the code up while drinking your Red Bull with Bourbon last night. If it looks good, click Publish and click Yes.
  4. Click Start
  5. Switch Run Settings to Hybrid Worker, and select your Hybrid Worker Group from the listbox (e.g. “mecmlab”)
  6. Click OK
  7. Wait for the status to show “Running”, while watching Task Manager on the on-prem machine (e.g. CM01). You should see a new powershell.exe process pop up, note the time it takes from when Azure changes status from “Queued” to “Running” and when you see the powershell.exe process launch. Then note when it closes and when Azure shows “Completed” (or hopefully not “Failed”).
  8. Click the Output tab and wait for output
  9. If all goes well, you should see Status “Completed” and a message from output below showing “Sending an email to <whoever> :: subject = CM Inventory Report”
  10. Check your Inbox for the email and attachment. Note that due to how you configured the “MailSender” address, the message may land in your Junk/Spam folder.
  11. Open the attachment, gaze at the data, and consume your beverages now.

Ding! Fries are done! 🙂

Thoughts? Comments? Bugs? Post a comment below…

Happy Thanksgiving!

conferences, System Center, Technology

MMS Jazz Edition, 2019

NOTE: For a better, and more-thorough break-out of sessions and topics, please check out Bryan Dam’s blog post. I may repeat some things here, but that’s only because I’m mentally challenged, and had way too much coffee today.

UPDATE 1911.21 – Capn’s log, brain beamed down to surface of planet Blog, once again without coffee. Still no response. Scanners revealed another blog goof, reported by leaders of planet Adam Gross. Phasers are set to “caffeine”, and aimed at notes again, to fix presenter names and add missing items. Kirk out.

UPDATE 1911.20 – I knew my brain wasn’t firing on all 3 cylinders today. I accidentally mixed a few sessions into my “notes” section that I didn’t actually attend. I got my “notes” summary lists mixed up. So the ones I wasn’t able to sit in I’ve marked with “[notes only]”. My apologies for the mix-up. I also fixed the missing link to Damien’s twitter profile.

Where do I begin? I’m still sorting this out in my tiny brain. In short, MMS is a fire hose of information and networking. I’m terrible with names until I’m around someone for a few days, and even then, beer can make that a little tougher for me. My kids are 21 to 30 years old now, and I still get their names wrong.

In short, this event was…

  • My first trip to New Orleans
  • My first time at (any) MMS
  • My first time speaking in front of more than 20 people (since 1998)
  • My first time meeting so many people I follow online, read their books, blogs, videos, and tweets
  • My first White Russian slushy drink since 1984, and probably my last

MMS vs. Ignite?

This question comes up A LOT from customers I work with. I’ve been to a few TechEd and Ignite conferences (my last was 2017), but only one MMS. From what I experienced, and learned from others:

  • Pick a conference based on what you want to get out of it (strategic direction vs. technical deep-dive).
  • If your goal is to gain insights into high-level strategic direction of a vendor, go to a conference like Ignite.
  • If your goal is to gain insights into the technical aspects of pursuing a direction, go to a conference like MMS.
  • If your goal is having a deeper conversation with speakers, vendors, peers, and networking, go to MMS.
  • If your goal is just to get drunk and pass out, go to Bourbon Street.

More than that, and this is really what sets MMS apart: the scale of the event. (IMHO) Ignite is nice, but getting a few minutes to chat directly with a speaker can be a real challenge. In most cases, they, or you, will be in a hurry to get to the next session, sessions are often packed, AND there will likely be 50 other attendees who want to have a conversation with the speaker. It can leave attendees feeling lost in the crowd.

MMS is much smaller by contrast, and that’s intentional. The goal is to ensure a greater opportunity to have a conversation with speakers, as well as other attendees. In the words of several people I heard: “It really feels like a big family reunion”. Even on my first visit, it felt like that. Actually, it’s much better than a dysfunctional family reunion.

I’ve said this already, but still, kudos to Brian Mason, Greg Ramsey, and all the other people working behind the scenes to plan, prepare, and carry this event through to the end. It was smooth and effortless from the outside, but I’m sure it was more effort under the hood than most of us realize.

New Orleans

That was my first time in New Orleans, and I wandered mostly around the French Quarter, Jackson Square, Woldenberg Park, the Riverwalk, and of course, Bourbon Street. The historic scenery, architecture, and views were amazing (I’m a solo photo-walk nerd). All of the restaurants and eateries I tried were fantastic; even the hotel lounge and restaurant served amazing food and drinks. And the drinks… the drinks, omfg. I don’t condone alcohol for anyone else, but the drinks were pretty good. Anyhow, if you don’t drink, the food alone is worth the trip.

People and Sessions

As I already mentioned, just being there is enough to make me happy. But getting to present was icing on the cake. And then getting to meet so many people for the first time, especially people who I only “knew” from Twitter and blogging, was like the crack sprinkles on top of the icing on top of the cake.

I can honestly say that I didn’t attend any bad sessions. Not even remotely uninteresting. The topics were extremely relevant, and the content was very well honed. The range of technical depth went from “refresher” to “ah, now I see“, to “Oh shit! Where’s my jaw?! My brain exploded!“.

Due to the relatively compact nature of the conference, I found myself wanting to be in 5 rooms at the same time, but my cloning powers failed me. Even while doing my sessions, I wanted to be in the next room watching someone else’s presentation, and that’s not a joke.

Topics and Highlights

Here are some of the notes I took. It was really hard at times to keep up, so I recommend bringing a laptop or tablet to the sessions; even though the slide decks are provided afterwards, they don’t always include the golden nuggets from the discussions during the presentations:

  • ConfigMgr –> MEMCM (David James, Dune Desormeaux, Chris Sires, and Team / Microsoft)
    • Dual-licensing for Intune!
    • Admin Service in the cloud FTW
  • OSDBuilder, OSDSUS (David Segura, Donna Ryan)
    • Separated modules, new parameters
    • New “Content Packs”
    • WIM Witch – GUI alternative, closely-linked to OSDSUS
  • Delivery Optimization (Andreas Hammarskjold and Phil Wilcock)
    • DOINC, Microsoft Connected Cache (MCC)
    • PowerShell capabilities
    • Watch out with wireless networks!
    • O365 ProPlus MCC distribution coming very soon
    • BranchCache/PeerCache going away (someday)
  • CM Admin Service (Adam Gross and Kim Oppalfens) [notes only]
    • Intune talking to on-prem CM clients
    • Custom tasks / workflows
    • How console talks to SMS Provider (1910+)
    • Damn it – I wanted to be in this one too
  • CM Common Mistakes (part-1 Adam Gross and Doug Wilson)
    • Collections, Queries, Nesting, Update Schedules
    • Console folders and naming conventions
    • AD discovery, OU filters, Logs
    • Ignoring logs (console, servers and clients)
    • CmTrace and OneTrace
    • Software Updates
    • Client Health
  • CM / MEMCM Security/Defense (Kim Oppalfens and Tom Degreef) [notes only]
    • Anti-Virus exclusions (missing or too broad)
    • Common attack vectors: code-signing, firewalls, privileged accounts, shared accounts
    • Damn it – I wanted to be in this one too
  • BIOS Firmware Updates (Maurice Daly and Jorgen Nilsson) [notes only]
  • PowerShell Performance and Security (Nathan Ziehnert and Scott Corio)
    • Optimize for performance
    • Signing, controlled access
    • .NET vs. PowerShell methods and patterns
    • Don’t assume. Verify (measure-command, new-timespan)
  • SQL Queries / Updates Compliance Reporting (Bryan Dam and Garth Jones)
    • Use vRBAC views when possible (often faster)
    • Use v_CombinedDeviceResources vs. vWorkstationStatus (CNIsOnline, etc.)
    • Be careful to match source views when creating drill-down reports (parent->child)
  • Avoiding Common Automation Mistakes (Cameron and Donnie)
    • Picking the right automation tools
    • Prototyping vs. Production (Flow/PowerApps to Functions)

And these are just SOME of the notes from only the sessions I was lucky enough to attend. If I had a “short-list” from all of the sessions, it would easily be 10 times as long. I didn’t get a chance to attend any of the daily Jam Sessions, which everyone said were fantastic.

Jam Sessions are 1-on-1 conversations between vendors and attendees to discuss product issues, features, enhancements, and use-cases. Pretty much vendors learning about customer needs, and customers learning what’s possible, and what might need to change.

Basically: No matter what session you attend, you will learn something. AND, you will get a great opportunity to ask direct questions and get direct answers.

As Cameron says: “You should come with questions, and leave with answers”.

About them Sponsors

Another thing about MMS that I found different is how attendees perceive and interact with the sponsors. At bigger conferences, the common view is that they are “freebie stations”. Get your badge scanned, get a free toy or sticker, and hope you win something cool.

At MMS, the list of sponsors was scaled to match the number of attendees, and that works fantastically. There were only a handful of booths/demo stations, and the vendor reps were also some of the speakers. You could sit with them and have an open conversation about almost anything. The dialogue aspect benefits both the attendee and the vendor, as they each learn about the other, and what changes are coming down the line.

The sponsors this year included PatchMyPC, 2-Pint, 1E, Adaptiva, Parallels, and Recast Software. As with other MMS conferences, Microsoft had a strong showing, with members of endpoint-related teams such as ConfigMgr/MEMCM, Intune, and Docs (quite possibly others I overlooked). I also got a cool button from Aaron with a slogan that I forgot I tweeted! (I’ve been known to do things like that)…

Today I submitted four (4) proposals for MMS MOA 2020, so I have all my fingers and toes crossed for good luck. If even one (1) is accepted, I’ll be beyond happy.

I hope to see you there!

Devices, Projects, Scripting, System Center, Technology, windows

More Paint Fumes with Automated Device Naming using ConfigMgrWebService and PowerShell

Update-3: Added example for using the GetCMFirstAvailableNameSequence() function.

Update-2: fixed spelling of Nickolaj. Changed coffee brand. Moved lawn rake.

Updated-1 11/6: Corrected a few bone-headed mistakes. (1) ConfigMgrWebService author is Nickolaj Andersen (sorry about that), but I still need to let Maurice finish his dinner. (2) There is a function provided for querying the next available incremental name in a numbered sequence, from CM not AD, GetCMFirstAvailableNameSequence. So at least some of this post still holds up. Geez, this coffee just isn’t strong enough anymore.

As if there aren’t enough types of paint to inhale already, much like the wide variety of methods for naming devices, here’s one that was interesting and fun to automate. This is for environments where devices need to be named in an incremental, numerical fashion.

Background

  • Devices use a standard prefix based on form-factor (e.g. Chassis Type) such as “WS”, and “LT”.
  • Unique portion is appended as an incremental value (e.g. 001, 002, …)
  • Examples: “WS001”, “LT003”

Challenge

  • How to determine the next available name in the Active Directory domain, during an OSD task sequence, provided only with the prefix portion such as “WS”.
  • Solution should “fill-in” missing gaps in names found in AD, such as 001, 002, 004 (should return 003 as next available).

Approach

  • Obtain device characteristics from WMI using PowerShell
  • Obtain the next available account in AD by querying the ConfigMgrWebService using PowerShell.

I’ve seen quite a few ways to address this, from importing modules and using ADSI queries, etc., all of which involve PowerShell (and why not?!).

I’ve spent the better part of the last year helping customers move to “Modern Driver Management” using SCConfigMgr’s solution set (click here for more details), so I remembered that ConfigMgrWebService also provides more query functions besides GetCMDrivePackageByModel.

The GetADComputer function looks closest to what I wanted, but it only returns one result per query (ask for a computer, get back one computer). No option that I could see for getting a list, much less a filtered list, of computer names in AD. I could have just bothered Maurice for like the 42nd time, but the poor guy needs to enjoy dinner in peace too. And there’s a very easy way to leverage that function as-is.

The example (below) takes the web service URI, the web service “secret key”, and naming pattern parameters, and loops through AD starting with a base index name of 1 (or 001). I set an arbitrary limit of 100 instances per prefix, but this can all be modified to whatever is needed.

[CmdletBinding()]
param (
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $URI,
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $SecretKey,
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $Prefix,
  [parameter()][ValidateRange(3,15)][int] $NameLength = 4
)
try {
  Write-Verbose "connecting to web service at $URI"
  $ws = New-WebServiceProxy -Uri $URI -ErrorAction 'stop'
  for ($index = 1; $index -lt 100; $index++) {
    $nextname = $Prefix + $([string]$index).PadLeft($NameLength - $($Prefix.Length), "0")
    Write-Verbose "checking name: $nextname"
    $found = ($ws.GetADComputer($SecretKey, $nextname)).SamAccountName
    if (![string]::IsNullOrEmpty($found)) {
      Write-Verbose "name exists: $nextname"
    }
    else {
      return $nextname
    }
  }
  Write-Output "no names for this prefix available from 1 to 100"
}
catch {
  Write-Error $_.Exception.Message 
}

The -NameLength parameter controls the overall length of the names to check and return. Again, I set an arbitrary range of 3 to 15 characters (example: “WS1” to “WS0000000000001”). Here are the results when I run this against my AD domain, which already has devices WS01, WS02, WS03, WS09 and WS10…

It’s pretty quick. In my tests it runs about 20 to 30 names per second.

(updated) Using the built-in GetCMFirstAvailableNameSequence() function, it would look like the following…

try {
  Write-Verbose "connecting to web service at $URI"
  $ws = New-WebServiceProxy -Uri $URI -ErrorAction 'stop'
  $nextSuffix = $ws.GetCMFirstAvailableNameSequence($SecretKey, $NameLength, $Prefix)
  $nextname = "$Prefix$nextSuffix"
  Write-Verbose "checking name: $nextname"
  return $nextname
}
catch {
  Write-Error $_.Exception.Message 
}

To directly assign the output to a task sequence variable, like OSDComputerName, just wrap the code in a function block (i.e. Get-NextADDeviceName), and append a few more lines to assign the output…

$newname = Get-NextADDeviceName
try {
  $tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment
  $tsenv.Value("OSDComputerName") = $newname
}
catch {
  Write-Warning "not running in a task sequence at the moment"
}

Putting all this together it looks like the following, but I’m sure you will see it can be improved a great deal…

[CmdletBinding()]
param (
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $URI,
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $SecretKey,
  [parameter(Mandatory)][ValidateNotNullOrEmpty()][string] $Prefix,
  [parameter()][ValidateRange(3,15)][int] $NameLength = 4
)
function Get-NextADDeviceName {
  try {
    Write-Verbose "connecting to web service at $URI"
    $ws = New-WebServiceProxy -Uri $URI -ErrorAction 'stop'
    for ($index = 1; $index -lt 100; $index++) {
      $nextname = $Prefix + $([string]$index).PadLeft($NameLength - $($Prefix.Length), "0")
      Write-Verbose "checking name: $nextname"
      $found = ($ws.GetADComputer($SecretKey, $nextname)).SamAccountName
      if (![string]::IsNullOrEmpty($found)) {
        Write-Verbose "name exists: $nextname"
      }
      else {
        return $nextname
      }
    }
    Write-Output "no names for this prefix available from 1 to 100"
  }
  catch {
    Write-Error $_.Exception.Message 
  }
}
# call the function and get the next available name
$newname = Get-NextADDeviceName
Write-Verbose $newname
try {
  $tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment
  $tsenv.Value("OSDComputerName") = $newname
}
catch {
  Write-Warning "not running in a task sequence at the moment"
}

From here, it’s simply a matter of adding it to the task sequence, using Run PowerShell Script. If you need an example for querying the chassis type, check out my earlier post on computer naming.
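
For reference, a hypothetical invocation would look something like the following, whether from a Run PowerShell Script step or a quick console test. The script file name, web service URI, and secret key are made-up placeholders; substitute your own:

# hypothetical file name and values - use your actual web service URI, secret key, and prefix
.\Get-NextADDeviceName.ps1 -URI "https://cm01.contoso.local/ConfigMgrWebService/ConfigMgr.asmx" `
    -SecretKey "00000000-0000-0000-0000-000000000000" -Prefix "WS" -NameLength 4 -Verbose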

Cheers!

databases, Scripting, System Center, Technology

Let’s Take a PowerShell Dump with ConfigMgr!

Let’s pull some inventory summary data out of Configuration Manager, just like a TSA agent with Latex gloves on! We’ll be using the following goodies:

  • PowerShell
  • PowerShell Module: dbatools
  • PowerShell Module: importexcel
  • A JSON file
  • A Configuration Manager site with a SQL Server database
  • A user account which can access the database and successfully run queries, has access to install PowerShell modules, and has plenty of coffee and bad jokes.

Introduction

This little project demonstrates one example of exporting information from a Configuration Manager SQL Server database, directly to Microsoft Excel, without even having Microsoft Excel installed. You will need Microsoft Excel eventually, in order to view the results, but that could be done years later, and by then, I’ll be too old to remember any of this.

For this demonstration I used the following environment:

  • Configuration Manager 1906 (or later)
  • SQL Server 2016 (or later)
  • Windows Server 2016 (or Windows 10, 1803 or later)
  • Windows PowerShell 5.1

Staring into the Abyss

It starts with a JSON file, which contains the parameters that include the dataset names, SQL queries, and specific output values. You could store this information in almost any desired format, I chose JSON because it’s fun to say out loud when you work next to a guy named Jason, and because my colleague Ryan nags the shit out of me to stop using XML. There. Are you happy now? 🙂

Here’s a snippet of the config.json file. If it doesn’t display correctly, I’m blaming Ryan, and WordPress. But mostly Ryan. I don’t know why.

{
    "Summary": {
        "query": "SELECT ( SELECT COUNT(*) FROM v_R_SYSTEM ) AS Devices,( SELECT COUNT(*) FROM v_R_USER ) AS Users",
        "properties": "Devices,Users"
    },
    "OperatingSystems": {
        "query": "select distinct Caption0 as OSName,BuildNumber0 as BuildNum, case when (BuildNumber0 = 18363) then '1909' when (BuildNumber0 = 18362) then '1903' when (BuildNumber0 = 17763) then '1809' when (BuildNumber0 = 17134) then '1803' when (BuildNumber0 = 16299) then '1709' when (BuildNumber0 = 15063) then '1703' when (BuildNumber0 = 14393) then '1607' when (BuildNumber0 = 10586) then '1511' when (BuildNumber0 = 10240) then '1507' else '' end as Build, CSDVersion0 as SvcPack, COUNT(*) as Devices from v_GS_OPERATING_SYSTEM group by Caption0,BuildNumber0,CSDVersion0 order by Caption0,BuildNumber0",
        "properties": "OSName,BuildNum,Build,SvcPack,Devices"
    },
    "Models": {
        "query": "select distinct Manufacturer0 as Manufacturer, Model0 as Model, Count(*) as Devices from dbo.v_GS_COMPUTER_SYSTEM group by Manufacturer0,Model0 order by Manufacturer0,Model0",
        "properties": "Manufacturer,Model,Devices"
    },
    "Disks": {
        "query": "select distinct ld.SystemName0 as Name, ld.Size0 as Capacity, ld.FreeSpace0 as FreeSpace, cs.Model0 as Model, case when (FreeSpace0 < 20000) then 'No' else 'Yes' end as Ready from v_GS_LOGICAL_DISK ld inner join v_GS_COMPUTER_SYSTEM cs on cs.ResourceID = ld.ResourceID where ld.DeviceID0 = 'C:' order by ld.SystemName0",
        "properties": "Name,Model,Capacity,FreeSpace,Ready"
    },
    "Memory": {
        "query": "select sys.Name0 as Name,pm.ResourceID,pm.Capacity0 as Memory from v_GS_PHYSICAL_MEMORY pm inner join v_R_SYSTEM sys on pm.ResourceID = sys.ResourceID group by sys.Name0,pm.ResourceID,pm.Capacity0 order by sys.Name0",
        "properties": "Name,ResourceID,Memory"
    },
    "Software": {
        "query": "select distinct ARPDisplayName0 as ProductName,ProductVersion0 as Version,Publisher0 as Publisher, ProductCode0 as ProductCode, COUNT(*) as Installs from v_GS_INSTALLED_SOFTWARE_CATEGORIZED where (LTRIM(ARPDisplayname0) <> '') and (SUBSTRING(ARPDisplayName0,0,2) <> '..') group by ARPDisplayName0,ProductVersion0,Publisher0,ProductCode0 order by ARPDisplayName0,ProductVersion0",
        "properties": "Installs,ProductName,Version,Publisher,ProductCode"
    },
    "ADSites": {
        "query": "select distinct sys.AD_Site_Name0 as ADSite, COUNT(*) as Devices from v_R_SYSTEM as sys group by AD_Site_Name0 order by AD_Site_Name0",
        "properties": "ADSite,Devices"
    },
    "Gateways": {
        "query": "select distinct DefaultIPGateway0 as Gateway, COUNT(*) as Devices from v_GS_NETWORK_ADAPTER_CONFIGURATION where DefaultIPGateway0 IS NOT NULL group by DefaultIPGateway0 order by Devices desc",
        "properties": "Gateway,Devices"    
    },
    "DistPoints": {
        "query": "select Servername,SMSSiteCode as Site,Description,Type,IsPXE as PXE,IsDoincEnabled as DOINC, IsBITS as BITS,IsMulticast as MCast,IsPullDP as PullDP,IsPeerDP as PeerDP, SslState as SSL,PreStagingAllowed as Prestage from v_DistributionPoints order by ServerName",
        "properties": "Servername,Site,Description,Type,PXE,DOINC,BITS,MCast,PullDP,PeerDP,SSL,Prestage"
    }
}

The script (below) imports the JSON data as a hashtable-like structure (it’s actually a custom object type, but whatever, it’s JSON so it has to be good). Then we connect to the database instance, punch it in the face, and steal its data. By the time it wakes up, we’re in Vegas, baby.

#requires -modules dbatools,importexcel
[CmdletBinding()]
param(
  [parameter(Mandatory)][ValidateLength(3,3)][string] $SiteCode,
  [parameter()][ValidateNotNullOrEmpty()][string] $DbHost = "localhost",
  [parameter()][ValidateNotNullOrEmpty()][string] $DbName = "CM_$SiteCode",
  [parameter()][ValidateNotNullOrEmpty()][string] $ReportPath = "$env:USERPROFILE\documents",
  [parameter()][ValidateNotNullOrEmpty()][string] $ConfigFile = ".\config.json"
)
$ErrorActionPreference = 'stop'
$XlFile = $(Join-Path $ReportPath "$SiteCode`_Inventory.xlsx")

if (Test-Path $XlFile) { Remove-Item -Path $XlFile -Force }

if (-not(Test-Path $ConfigFile)) {
  Write-Warning "configuration file not found: $ConfigFile"
  break
}
try {
  $cfg = Get-Content $ConfigFile | ConvertFrom-Json
  $keys = ($cfg.psobject.Properties).Name
  foreach ($key in $keys) {
    $qset  = $cfg."$key"
    $qtext = $qset.query
    $props = $qset.properties -split ','
    Invoke-DataExport -Query $qtext -ReportName $key -Properties $props
  }
  Write-Host "processing complete" -ForegroundColor Green
}
catch {
  Write-Error $_.Exception.Message 
}

There’s also a nested function (in the same .ps1 file) which does the dirty work of disposing the bodies and running the query truck:

function Invoke-DataExport {
  [CmdletBinding()]
  param (
    [parameter(Position=0)][ValidateNotNullOrEmpty()][string] $Query,
    [parameter(Position=1)][ValidateNotNullOrEmpty()][string] $ReportName,
    [parameter()][string[]] $Properties
  )
  try {
    Write-Host "exporting: $ReportName" -ForegroundColor Cyan
    Invoke-DbaQuery -SqlInstance $DbHost -Database $DbName -Query $Query | Select-Object $Properties | Export-Excel -Path $XlFile -WorksheetName $ReportName
  }
  catch {
    Write-Error $_.Exception.Message
  }
}

Don’t worry, there’s a link to download this roadkill into a plastic bag for safe consumption, near the end of this rambling mess.

To use this code, you’ll need to install two (2) PowerShell modules first:

Install-Module dbatools
Install-Module ImportExcel
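
Assuming you save the main script as something like Export-CmInventory.ps1 (the file name is mine, call it whatever you like), a typical run might look like this:

# hypothetical file name; -SiteCode is required, the rest fall back to the defaults in the param block
.\Export-CmInventory.ps1 -SiteCode "P01" -DbHost "cm01.contoso.local" -ConfigFile ".\config.json"
# by default the output lands in $env:USERPROFILE\documents\P01_Inventory.xlsx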

Let’s walk through this bucket of puke that I might call “code”:

  • First, we check to see if the -ConfigFile parameter points to a real file object, or if the user was high on meth. Then we proceed.
  • Then we look to see if there’s an existing output file, and if there is, we nuke it with an Ebola Nuclear Laser bomb, and move on.
  • Next, we import the configuration data from the JSON file
  • Then, we fetch the main group names as $keys, which are kind of like getting the column names from a table, or something like that. Actually, not at all, but it sounded good at the time.
  • Then, we iterate (loop) through each key and fetch its associated sub-properties: “query” and “properties”. The “query” data is the actual SQL select statement. The “properties” are the names of the output columns which I want to export, and the order in which they should be exported (from left to right).
  • Then, submit the key name, query and properties to the Invoke-DataExport function.
  • The Invoke-DataExport function sends the query to the SQL instance, and passes the results (dataset) over the pipeline to the Select-Object step which filters only the properties it’s told, then sends it to Export-Excel to stuff into the spreadsheet file.
  • Finally, we pour some Write-Host sauce on it, and microwave for about 3 seconds and serve.

Sick of reading this amateurish nausea? Me too. If you just want to download the code and laugh at it, here are the links:

Remember, there are no guarantees or warranties, life is a mystery, actual results may vary, batteries are not included, and you are 100% liable for any and all bad things that happen anywhere in the universe while playing with this stuff.

If you like it, post a comment. If you don’t like it, post a comment. If you don’t like posting comments, post a comment.

Cheers!

Scripting, System Center, Technology, windows

Thank you, PowerShell!

These are just a few snippets of PowerShell code that helped me out during the past week. There were quite a few others, but these were the easiest examples to show.

UPDATE 1 – Swapped-out Invoke-WebRequest with System.Net.WebClient/DownloadFile in the download script example below, per suggestion of Guy Leech (thanks!)

Adding a User Login to SQL Server

This example adds a domain computer account to a SQL Server instance with “sysadmin” role membership. This example requires the dbatools PowerShell module.

$computer = 'db02'
$account = 'contoso\cm01$'
New-DbaLogin -SqlInstance $computer -Login $account
Get-DbaLogin -SqlInstance $computer -Login $account | Set-DbaLogin -AddRole sysadmin

Adding an Account to a remote Server Administrators Group with Service Logon Rights

This example adds a domain user/service account into the local Administrators group on a remote server, and grants it permission to “log on as a service” and “log on as a batch job”. This could have been accomplished via GPO, but for various reasons that was not an option. This example requires the carbon PowerShell module.

$computer = 'db02'
$account = 'contoso\cm-sql'

$s1 = New-PSSession -ComputerName $computer -Name 's1'
Enter-PSSession -Session $s1
Add-LocalGroupMember -Group 'Administrators' -Member $account
Grant-CPrivilege -Identity $account -Privilege 'SeBatchLogonRight','SeServiceLogonRight'
Exit-PSSession
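
Note that the snippet above assumes you’re typing the middle two commands interactively inside the remote session. If you’d rather run the whole thing as a script, a rough non-interactive sketch (assuming the carbon module is already installed on the remote server) would be:

# non-interactive sketch: run both commands inside the remote session, then clean up
Invoke-Command -Session $s1 -ScriptBlock {
    param($acct)
    Add-LocalGroupMember -Group 'Administrators' -Member $acct
    Grant-CPrivilege -Identity $acct -Privilege 'SeBatchLogonRight','SeServiceLogonRight'
} -ArgumentList $account
Remove-PSSession -Session $s1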

Creating a Shared Folder Tree

This is basically a template for establishing a “Definitive Source Library”, or centralized content store for things like applications, drivers, scripts, utilities, and so on.

It creates a root-level folder “Sources” on a selected logical drive, sharing it as “Sources$” (hidden share), and then creating more sub-folders below that. The folder structure can be defined in a data file, a GitHub Gist, an explicit variable, or just about anything you prefer. In this example, we used a .txt file.

Example: folders.txt

Apps\7-zip
Apps\Google\Chrome
Apps\Microsoft\O365ProPlus
Apps\Mozilla\Firefox
Apps\Notepad++
Apps\VLC
Certs
DrvPkg
DrvSrc
OSImages
Scripts\ConfigMgr
Scripts\WSUS
Scripts\SQL
Tools

Example: script code… (be sure to change the $DriveLetter assignment to suit your needs)

[CmdletBinding()]
param (
  [parameter()][ValidateLength(1,1)][string] $DriveLetter = "E",
  [parameter()][ValidateNotNullOrEmpty()][string]$InputFile = ".\folders.txt",
  [parameter()][ValidateNotNullOrEmpty()][string]$RootFolder = "SOURCES"
)
$ErrorActionPreference = 'stop'
try {
    $rootPath = "$DriveLetter`:\$RootFolder"
    if (!(Test-Path $rootPath)) { 
        mkdir $rootPath -Force 
        Write-Verbose "created folder: $rootPath"
    }
    $folders = Get-Content $InputFile
    foreach ($folder in $folders) {
        $fpath = Join-Path -Path $rootPath -ChildPath $folder
        if (!(Test-Path $fpath)) {
            mkdir $fpath -Force 
            Write-Verbose "created folder: $fpath"
        }
        else {
            Write-Verbose "folder exists: $fpath"
        }
    }
    $shareName = "$RootFolder`$"
    if ($shareName -notin (Get-SmbShare).Name) {
        Write-Verbose "creating share: $shareName"
        New-SmbShare -Path $rootPath -Name "$shareName"
    }
    Write-Host "finished processing"
}
catch {
    Write-Error $_.Exception.Message
}
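
A hypothetical invocation (the script file name is made up, and point -DriveLetter at a drive that actually exists on your server):

# hypothetical file name; creates E:\SOURCES, the sub-folders from folders.txt, and the hidden SOURCES$ share
.\New-SourceTree.ps1 -DriveLetter "E" -InputFile ".\folders.txt" -RootFolder "SOURCES" -Verbose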

Downloading Sample Applications

This example downloads installer files directly from vendor web or FTP sites into the appropriate (directed) local folders (which were created by the preceding example). This also uses a data file to specify the folder path and the associated remote URL path. It’s worth noting that I chose a tilde “~” character delimiter to distinguish from the embedded “=” characters in the URL strings, but you could make this a .csv file and wrap the values in double-quotes (e.g. “http://blahblahblahblah&locale=en-us”).

Example: downloads.txt

Apps\7-Zip~https://www.7-zip.org/a/7z1900-x64.msi
Apps\Mozilla\Firefox~https://download-installer.cdn.mozilla.net/pub/firefox/releases/70.0/win64/en-US/Firefox%20Setup%2070.0.msi
Apps\Google\Chrome~https://dl.google.com/tag/s/dl/chrome/install/googlechromestandaloneenterprise64.msi
Apps\Notepad++~http://download.notepad-plus-plus.org/repository/7.x/7.8/npp.7.8.Installer.x64.exe
Apps\VLC~https://ftp.osuosl.org/pub/videolan/vlc/3.0.8/win64/vlc-3.0.8-win64.exe

Bonus: additional downloads.txt entries to include SCConfigMgr’s Modern Driver Management tools, Ola Hallengren’s maintenance script, Bryan Dam’s WSUS ass-whoopin script, and positive vibes. 🙂

Scripts\WSUS~https://damgoodadmin.com/download/invoke-dgasoftwareupdatemaintenance/?wpdmdl=326&refresh=5db464511e44c1572103249~Invoke-DgaSoftwareUpdateMaintenance.zip
Tools\DriverAutomationTool~https://gallery.technet.microsoft.com/Driver-Tool-Automate-9ddcc010/file/224244/1/Driver%20Automation%20Tool.zip
Tools\ConfigMgrWebService~https://gallery.technet.microsoft.com/ConfigMgr-WebService-100-572825b2/file/223724/1/ConfigMgr%20WebService%201.8.0.zip
Tools\ConfigMgrWebService~https://raw.githubusercontent.com/SCConfigMgr/ConfigMgr/master/Operating%20System%20Deployment/Drivers/Invoke-CMApplyDriverPackage.ps1
Tools\SQL~https://ola.hallengren.com/scripts/MaintenanceSolution.sql

Example: script code

[CmdletBinding()]
param(
    [parameter()][ValidateNotNullOrEmpty()][string]$InputFile = ".\downloads.txt",
    [parameter()][ValidateNotNullOrEmpty()][string]$rootPath = "H:\SOURCES"
)
try {
    $apps = Get-Content $InputFile
    foreach ($app in $apps) {
        $appdata   = $app -split '~'
        $appPath   = $appdata[0]
        $appSource = $appdata[1]
        if ($appdata.Count -eq 3) {
            $filename = $appdata[2]
        }
        else {
            $filename  = $($appSource -split '/' | select -Last 1) -replace '%20','_'
        }
        $destPath  = Join-Path -Path $rootPath -ChildPath $appPath
        if (!(Test-Path $destPath)) { 
            mkdir $destPath -Force 
            Write-Verbose "created folder: $destPath"
        }
        $destination = Join-Path -Path $destPath -ChildPath "$filename"
        if (!(Test-Path $destination)) {
            # Invoke-WebRequest -Uri $appSource -OutFile $destination
            # modified per https://twitter.com/guyrleech
            [void](New-Object System.Net.WebClient).DownloadFile($appSource, $destination)
        }
        else {
            Write-Verbose "file exists: $destination"
        }
    }
    Write-Host "finished processing"
}
catch {
    Write-Error $_.Exception.Message 
}
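
And the matching hypothetical run for the downloader (again, the file name is mine):

# hypothetical file name; pulls each URL in downloads.txt into its matching folder under H:\SOURCES
.\Get-SourceDownloads.ps1 -InputFile ".\downloads.txt" -RootPath "H:\SOURCES" -Verbose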

Cheers!