
Export HW/SW Inventory Data from Intune Devices using PowerShell

What is this recent torrent of Intune gibberish coming from this foul-mouthed idiot? Is he some sort of “expert”? Bah! Nope! I’m just working with it a bit more lately, so I figured I’d brain-dump on it while I can (and to help me recall things if I step away from it for a few months).

Background and Setup

The inventory data for Intune-managed Windows 10 devices is stored in Azure and exposed through the Graph API. And while it can seem challenging to find good examples for accessing it with PowerShell, there is in fact a very nice repository of example scripts on the Microsoft GitHub site.

Given that I’m still learning my way around Intune and Graph, the first things I found helpful were the examples ManagedDevices_Get.ps1 and ManagedDevices_Apps_Get.ps1, under the ManagedDevices folder. Both of these were very helpful, and I was able to pull the data I needed.

However, since I needed to query 1800+ devices, I noticed the default “page” limit returns only the first 1000 records (devices). Then I found they also posted a nice example, ManagedDevices_Get_Paging.ps1, which I merged with ManagedDevices_Get.ps1 and was able to pull all of the devices at one time. The main part that does the work is lines 179 to 187 (below)…

$DevicesNextLink = $DevicesResponse."@odata.nextLink"
while ($null -ne $DevicesNextLink){
    $DevicesResponse = (Invoke-RestMethod -Uri $DevicesNextLink -Headers $authToken -Method Get)
    $DevicesNextLink = $DevicesResponse."@odata.nextLink"
    $Devices += $DevicesResponse.value
}

After that, I added the 2 or 3 lines of code to query the installed applications and add those to an output object (a master set of data for each device, including hardware, operating system and applications). I added this to a new function (below) to return the data for further processing.

function Get-DsIntuneDeviceData {
	param (
		[parameter(Mandatory)][string] $UserName,
		[parameter()][switch] $ShowProgress,
		[parameter()][switch] $Detailed
	)
	Get-DsIntuneAuth -UserName $UserName
	$Devices = Get-ManagedDevices
	Write-Host "returned $($Devices.Count) managed devices"
	if ($Devices){
		$dx = 1
		$dcount = $Devices.Count
		foreach ($Device in $Devices){
			if ($ShowProgress) { 
				Write-Progress -Activity "Found $dcount" -Status "$dx of $dcount" -PercentComplete $(($dx/$dcount)*100) -id 1
			}
			$DeviceID = $Device.id
			$uri = "https://graph.microsoft.com/beta/deviceManagement/managedDevices('$DeviceID')?`$expand=detectedApps"
			$DetectedApps = (Invoke-RestMethod -Uri $uri -Headers $authToken -Method Get).detectedApps
			if ($Detailed) {
				$disksize  = [math]::Round(($Device.totalStorageSpaceInBytes / 1GB),2)
				$freespace = [math]::Round(($Device.freeStorageSpaceInBytes / 1GB),2)
				$mem       = [math]::Round(($Device.physicalMemoryInBytes / 1GB),2)
				[pscustomobject]@{
					DeviceName   = $Device.DeviceName
					DeviceID     = $DeviceID
					Manufacturer = $Device.manufacturer
					Model        = $Device.model
					MemoryGB     = $mem
					DiskSizeGB   = $disksize
					FreeSpaceGB  = $freespace
					SerialNumber = $Device.serialNumber
					OSName       = $Device.operatingSystem
					OSVersion    = $Device.osVersion
					Ownership    = $Device.ownerType
					Category     = $Device.deviceCategoryDisplayName
					Apps         = $DetectedApps
				}
			}
			else {
				$disksize  = [math]::Round(($Device.totalStorageSpaceInBytes / 1GB),2)
				$freespace = [math]::Round(($Device.freeStorageSpaceInBytes / 1GB),2)
				[pscustomobject]@{
					DeviceName   = $Device.DeviceName
					DeviceID     = $DeviceID
					DiskSizeGB   = $disksize
					FreeSpaceGB  = $freespace
					OSName       = $Device.operatingSystem
					OSVersion    = $Device.osVersion
					Apps         = $DetectedApps
				}
			}
			$dx++
		}
	}
	else {
		Write-Host "No Intune Managed Devices found..." -f green
	}
}

The full trainwreck can be safely viewed here. Be sure to wear rubber gloves while handling it.

With that, I decided to drop it into a new module to make it easier to access and reuse. I also added a few more functions, with the help of examples from Matthew Dowst and Eli Shlomo, and some calls to the ImportExcel PowerShell module by Doug Finke. I named this module ds-intune.


This example was tested on ds-intune 0.3.

Install-Module ds-intune
Get-Command -Module ds-intune

The two functions I’ll use below are Get-DsIntuneDeviceData and Export-DsIntuneAppInventory.

$CustomerName = "Contoso"
$UserName = "<your_AzureAD_UserPrincipalName>"
# be patient, this step can take a while if you have more than 50 machines
$devices = Get-DsIntuneDeviceData -UserName $UserName -ShowProgress -Detailed
Export-DsIntuneAppInventory -DeviceData $devices -Title $CustomerName -UserName $UserName -Overwrite -Show -Verbose

As always: Please post comments or corrections, winning lottery numbers, tasteless jokes, and happy thoughts. Here or at the GitHub repo.

Tomorrow I’m off to Ft. Myers for 3 days of work. Wish me luck.



Find Intune Devices with the Jan 2020 Windows 10 CU Patch Installed

I’ve been playing around with a bunch of different code fragments from all over the place, but I think I found a good mix that works. For now at least. Special (huge, gigantic) thanks to Matthew Dowst and Eli Shlomo for code examples which were used to build the following.


Requirements:

  • Intune subscription, with devices being managed with software updates
  • A LogAnalytics Workspace with Update Compliance solution added for collecting telemetry data
  • PowerShell
  • Azure / Log Analytics: SubscriptionId, ResourceGroup, WorkspaceName
  • LogAnalyticsQuery.psm1 (below)
  • Invoke-LogAnalyticsQuery.ps1 (below)


First, I got a huge leg-up with a KQL query from Matthew Dowst to show Log Analytics results from WaaSDeploymentStatus…

WaaSDeploymentStatus
| where TimeGenerated > ago(1d)
| where ReleaseName contains "KB4534273"
| summarize arg_max(TimeGenerated, *) by ComputerID
| project Computer, ComputerID, ExpectedInstallDate, DeploymentStatus, DetailedStatus
| render table

From there, I added a few (small) changes to show the OSName and OSVersion. But since each KB is matched to a particular build/version of Windows 10 (e.g. 1903 = KB4528760, 1809 = KB4534273, etc.), I ended up needing to match the query to the build so it looks for the relevant data.

Next, I stumbled over this (actually, I stumbled over the cat, and a pair of slippers first, then the code)…

I applied some voodoo magic brain sauce, and sprinkled some caffeine dust on it as follows:

$apiVersion = "2017-01-01-preview"

<#
	.SYNOPSIS
		Invokes a query against the Log Analytics Query API.

	.EXAMPLE
		Invoke-LogAnalyticsQuery -WorkspaceName my-workspace -SubscriptionId 0f991b9d-ab0e-4827-9cc7-984d7319017d -ResourceGroup my-resourcegroup
			-Query "union * | limit 1" -CreateObjectView

	.PARAMETER WorkspaceName
		The name of the Workspace to query against.

	.PARAMETER SubscriptionId
		The ID of the Subscription this Workspace belongs to.

	.PARAMETER ResourceGroup
		The name of the Resource Group this Workspace belongs to.

	.PARAMETER Query
		The query to execute.

	.PARAMETER Timespan
		The timespan to execute the query against. This should be an ISO 8601 timespan.

	.PARAMETER IncludeTabularView
		If specified, the raw tabular view from the API will be included in the response.

	.PARAMETER IncludeStatistics
		If specified, query statistics will be included in the response.

	.PARAMETER IncludeRender
		If specified, rendering statistics will be included (useful when querying metrics).

	.PARAMETER ServerTimeout
		Specifies the amount of time (in seconds) for the server to wait while executing the query.

	.PARAMETER Environment
		Internal use only.

	.NOTES
		Adapted heavily from Eli Shlomo's example.
#>
function Invoke-LogAnalyticsQuery {
	param (
		[Parameter(Mandatory)][string] $WorkspaceName,
		[Parameter(Mandatory)][guid] $SubscriptionId,
		[Parameter(Mandatory)][string] $ResourceGroup,
		[Parameter(Mandatory)][string] $Query,
		[string] $Timespan,
		[switch] $IncludeTabularView,
		[switch] $IncludeStatistics,
		[switch] $IncludeRender,
		[int] $ServerTimeout,
		[string][ValidateSet("", "int", "aimon")] $Environment = ""
	)

	$ErrorActionPreference = "Stop"

	$accessToken = GetAccessToken
	$armhost = GetArmHost $environment
	$queryParams = @("api-version=$apiVersion")
	$queryParamString = [string]::Join("&", $queryParams)
	$uri = BuildUri $armHost $subscriptionId $resourceGroup $workspaceName $queryParamString

	$body = @{
		"query" = $query;
		"timespan" = $Timespan
	} | ConvertTo-Json

	$headers = GetHeaders $accessToken -IncludeStatistics:$IncludeStatistics -IncludeRender:$IncludeRender -ServerTimeout $ServerTimeout
	$response = Invoke-WebRequest -UseBasicParsing -Uri $uri -Body $body -ContentType "application/json" -Headers $headers -Method Post

	if ($response.StatusCode -ne 200 -and $response.StatusCode -ne 204) {
		$statusCode = $response.StatusCode
		$reasonPhrase = $response.StatusDescription
		$message = $response.Content
		throw "Failed to execute query.`nStatus Code: $statusCode`nReason: $reasonPhrase`nMessage: $message"
	}

	$data = $response.Content | ConvertFrom-Json

	$result = New-Object PSObject
	$result | Add-Member -MemberType NoteProperty -Name Response -Value $response

	# In this case, we only need the response member set and we can bail out
	if ($response.StatusCode -eq 204) {
		return $result
	}

	$objectView = CreateObjectView $data

	$result | Add-Member -MemberType NoteProperty -Name Results -Value $objectView

	if ($IncludeTabularView) {
		$result | Add-Member -MemberType NoteProperty -Name Tables -Value $data.tables
	}

	if ($IncludeStatistics) {
		$result | Add-Member -MemberType NoteProperty -Name Statistics -Value $data.statistics
	}

	if ($IncludeRender) {
		$result | Add-Member -MemberType NoteProperty -Name Render -Value $data.render
	}

	$result
}

function GetAccessToken {
	$azureCmdlet = Get-Command -Name Get-AzureRMContext -ErrorAction SilentlyContinue
	if ($null -eq $azureCmdlet) {
		$null = Import-Module AzureRM -ErrorAction Stop
	}
	$AzureContext = & "Get-AzureRmContext" -ErrorAction Stop
	$authenticationFactory = New-Object -TypeName Microsoft.Azure.Commands.Common.Authentication.Factories.AuthenticationFactory
	if ((Get-Variable -Name PSEdition -ErrorAction Ignore) -and ('Core' -eq $PSEdition)) {
		[Action[string]]$stringAction = {param($s)}
		$serviceCredentials = $authenticationFactory.GetServiceClientCredentials($AzureContext, $stringAction)
	}
	else {
		$serviceCredentials = $authenticationFactory.GetServiceClientCredentials($AzureContext)
	}

	# We can't get a token directly from the service credentials. Instead, we need to make a dummy message which we will ask
	# the serviceCredentials to add an auth token to, then we can take the token from this message.
	$message = New-Object System.Net.Http.HttpRequestMessage -ArgumentList @([System.Net.Http.HttpMethod]::Get, "http://foobar/")
	$cancellationToken = New-Object System.Threading.CancellationToken
	$null = $serviceCredentials.ProcessHttpRequestAsync($message, $cancellationToken).GetAwaiter().GetResult()
	$accessToken = $message.Headers.GetValues("Authorization").Split(" ")[1] # This comes out in the form "Bearer <token>"
	$accessToken
}


function GetArmHost {
	param (
		[string] $environment
	)

	switch ($environment) {
		"" {
			$armHost = ""
		}
		"aimon" {
			$armHost = ""
		}
		"int" {
			$armHost = ""
		}
	}
	$armHost
}

function BuildUri {
	param (
		[string] $armHost,
		[string] $subscriptionId,
		[string] $resourceGroup,
		[string] $workspaceName,
		[string] $queryParams
	)

	"https://$armHost/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/" + `
		"microsoft.operationalinsights/workspaces/$workspaceName/api/query?$queryParams"
}

function GetHeaders {
	param (
		[string] $AccessToken,
		[switch] $IncludeStatistics,
		[switch] $IncludeRender,
		[int] $ServerTimeout
	)

	$preferString = "response-v1=true"

	if ($IncludeStatistics) {
		$preferString += ",include-statistics=true"
	}

	if ($IncludeRender) {
		$preferString += ",include-render=true"
	}

	if ($ServerTimeout -gt 0) {
		$preferString += ",wait=$ServerTimeout"
	}

	$headers = @{
		"Authorization" = "Bearer $AccessToken";
		"prefer" = $preferString;
		"x-ms-app" = "LogAnalyticsQuery.psm1";
		"x-ms-client-request-id" = [Guid]::NewGuid().ToString();
	}
	$headers
}


function CreateObjectView {
	param (
		$data
	)

	# Find the number of entries we'll need in this array
	$count = 0
	foreach ($table in $data.Tables) {
		$count += $table.Rows.Count
	}

	$objectView = New-Object object[] $count
	$i = 0;
	foreach ($table in $data.Tables) {
		foreach ($row in $table.Rows) {
			# Create a dictionary of properties
			$properties = @{}
			for ($columnNum=0; $columnNum -lt $table.Columns.Count; $columnNum++) {
				$properties[$table.Columns[$columnNum].name] = $row[$columnNum]
			}
			# Then create a PSObject from it. This seems to be *much* faster than using Add-Member
			$objectView[$i] = (New-Object PSObject -Property $properties)
			$null = $i++
		}
	}
	$objectView
}
Export-ModuleMember Invoke-LogAnalyticsQuery

Then I built an array / list (okay, a stupid nested array like a noob, geez) to match the OS versions to the respective KB numbers. The KQL query also has an added line for OSVersion, and the project statement adds OSVersion and OSBuild to the output stream.

param (
	[string] $WorkspaceName = "<your workspace name>",
	[guid] $SubscriptionId = "<your subscription id>",
	[string] $ResourceGroupName = "<your resource group name>"
)
if (!(Get-Module LogAnalyticsQuery)) { Import-Module .\LogAnalyticsQuery.psm1 }

$kblist = (('1903','KB4528760'),('1809','KB4534273'),('1803','KB4534293'),('1709','KB4534276'))

$results = @()

foreach ($kbset in $kblist) {
	$query = @"
WaaSDeploymentStatus
| where TimeGenerated > ago(1d)
| where OSVersion == "$($kbset[0])"
| where ReleaseName contains "$($kbset[1])"
| summarize arg_max(TimeGenerated, *) by ComputerID
| project Computer, ComputerID, OSVersion, OSBuild, ExpectedInstallDate, DeploymentStatus, DetailedStatus
| render table
"@
	$params = @{
		Query          = $query
		WorkspaceName  = $WorkspaceName
		SubscriptionId = $SubscriptionId
		ResourceGroup  = $ResourceGroupName
	}
	$results += ($(Invoke-LogAnalyticsQuery @params).Results)
}

$results


Saving these in the same folder as LogAnalyticsQuery.psm1 and Invoke-LogAnalyticsQuery.ps1 (respectively), I can run the script to compile the results and pump them out to a gridview…

.\Invoke-LogAnalyticsQuery.ps1 | Out-GridView

Or output to an Excel workbook using Doug Finke’s ImportExcel PowerShell module…

.\Invoke-LogAnalyticsQuery.ps1 | Export-Excel -Path "c:\reports\CU-installs.xlsx" -Show -WorksheetName "Installs" -ClearSheet -AutoSize -AutoFilter -FreezeTopRow



Quick Assist – The Overlooked Remote Assistant

Troubleshooting and assisting remote computer users has always been a challenge. For years, with earlier Windows versions, we had Remote Assistance. But many organizations never gave it much attention, and instead skipped right over it to third-party products like TeamViewer, LogMeIn, <fill-in-name>VNC, and others. Many of them are fine and do a great job. Many of them also impose feature limits unless you pay for the “professional”, “enterprise” or “premium” edition, etc.

With Windows 10 1709, Microsoft added another alternative to Remote Assistance, and I’m surprised how many sysadmins have never heard of it.

It’s called Quick Assist.

This came up during a call with a customer who uses Intune and they asked about the TeamViewer feature. Like other customers I’ve spoken with, they had the impression that there’s really no other alternative. But there is (are), and even if you opt out of using Quick Assist, there are other free tools available which may do what you need. This jaw-jacking mind spewage however is focused on Quick Assist, so let’s get in and go for a ride.

Quick Assist is not a “headless”, or unattended, remote connection solution. It requires the end user to be at their computer and logged on. It’s also fairly simple, and only requires the following conditions in order to use it:

  • Both the end user and the IT technician need machines with an Internet connection.
  • The end user needs to be present (and logged on) to the target computer.


The process goes like this:

  1. The technician opens Quick Assist (Win key + “Quick” and select the app)
  2. Technician clicks “Assist another person”
  3. Technician provides credentials to authenticate to the organization
  4. Technician provides the one-time access code to the user: By voice (over the phone), by email, Twitter DM, Facebook Messenger, SMS text, carrier pigeon, or crackheads on stolen bicycles who need to earn some extra cash.
  5. The end user enters the one-time access code, and clicks “Share Screen”
  6. The Technician is prompted to choose either “View Screen” or “Take Full Control”
  7. The end user reads the long, boring warning message that the remote user could be ISIS or Putin, and then clicks “Allow” to grant the Technician permissions, with fingers crossed and eyes closed.
  8. The Screen Sharing session begins.
  9. The Technician begins extracting personal information from the end user computer, assumes their identity and drains their bank accounts within 10 minutes, while calmly repeating “almost done, just a few more clicks“. Just kidding. We’re all honest here, right? See? This is why I don’t work at crisis call centers.

From here, the Technician can perform any task they would if logged onto the physical workstation (or virtual machine). Remember when Remote Assistance would blank-out the Technician screen if a UAC prompt was triggered on the end user desktop? Not with Quick Assist.

That’s right, the Technician is free to roam around and deploy whatever diabolically-destructive dastardly deviations they desire, and blame it all on the poor end user. That is, of course, unless the end user agrees to whatever terms the Technician demands of them.

Oh wait, that’s the wrong movie plot. Uhhh. I mean, from here, the Technician can troubleshoot anything related to Windows, applications, management tools, like MECM or Intune, Group Policy, or good old fashioned disk space vaporization from truckloads of vacation photos and video clips. But let’s see some juicy screen shots…

Oooh! Screen shots (high tech)

Step 1 – Technician Desktop

Step 2 – Technician Desktop

Step 3 – Technician Desktop

Step 4 – Technician Desktop (note the countdown timer)

Step 5 – End User Desktop

Step 6 – End User Desktop

Step 7 – Technician Desktop

Step 8 – End User Desktop (panel minimizes to top of screen)

Step 9 – Technician Desktop

The menu bar links are normally shown as icons without labels, unless you click the “. . .” link to the far-right (Details). This will toggle the text labels on and off. “Details” seems weird. Why not “Show Labels” like 99.999999999% of other apps do? Because…

From left to right:

  • Select Monitor – For toggling between multiple remote monitors (end user machine)
  • Annotate – This is supposed to allow the Technician to “draw” on the end user’s desktop to help guide them where to look or click on things. However, in my tests it was hit or miss (mostly miss). Your mileage may vary.
  • Actual Size – Zoom the Technician display to 100%, which may cut off and add scroll bars to the window frame
  • Toggle Instruction Channel – Opens the super-cheap chat tool (see rant below)
  • Restart – Restart the remote (end user) computer
  • Task Manager – Opens the Windows Task Manager on the remote computer
  • Pause – Pauses the Technician screen session, blanks it out (black) with a message “You have paused screen sharing” / also Toggles “Pause” to “Resume”
  • End – Ends the screen sharing session
  • Details – Toggles the menu bar labels on or off

Technical Stuff

  • While Quick Assist is active, the application process will show as “quickassist.exe”, running from %windir%\system32.
  • You can effectively block Quick Assist using Group Policy or 3rd Party products using the filename and path. But why would you want to?
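If you just want to check whether a session is active on a given machine, the process details in the bullets above are enough. A minimal sketch (the output strings are mine, not Quick Assist’s):

```powershell
# Check whether Quick Assist (quickassist.exe) is currently running on this machine
$qa = Get-Process -Name 'quickassist' -ErrorAction SilentlyContinue
if ($qa) {
	Write-Output "Quick Assist is running from: $($qa.Path)"
}
else {
	Write-Output "No Quick Assist session is active."
}
```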

Some Caveats

  • Quick Assist won’t help you if the remote machine isn’t turned on, not connected to the Internet (or your WAN or LAN), or when the remote machine won’t boot at all, for whatever reason, like being left in the trunk of a car or on the driveway in the rain.
  • The end user may not see chat messages sent by the Technician via the “Toggle Instruction Channel” feature, because the little clipboard icon on the end user machine will only show a red dot. The minimized panel also doesn’t stay on “top” (z-order) of other application windows, so the end user may need some coaching to find it.
  • The “Toggle Instruction Channel” is some sort of weird, bastardized, Tide pod ingesting, hallucinogenic imploded derivative of a chat tool, but without any chat history. So each message is the only one visible, on either end. Which is weird. I mean, this is 2020, and attorneys still play golf with money billed from customers who couldn’t find records of conversations to prove their actions in court, which is kind of why chat history is important. But whatever.
  • The Task Manager button is nice, but why not provide a CTRL+ALT+DEL button, or buttons for Computer Management, Regedit and Explorer?
  • It’s obvious, to me anyway, that Microsoft could easily bundle a really robust remote troubleshooting tool into Windows, but probably doesn’t want to start a bar room fight with partners. Not yet anyway.

What’s really amazing is that I could only find five (5) things to bitch about regarding Quick Assist. I’m sure there’s more, but the bullets above are enough for me. Aside from that stupid rant above, the rest of Quick Assist is pretty cool, and it could quite possibly bail your ass out in a pinch.



Windows Terminal with French Fries

Windows Terminal has been progressing quietly, and yet many IT folks are not even aware of its existence. This is a shame. I personally feel it should be included with Windows 10 and Windows Server by default (and still get the CI/CD pipeline of updates regardless of Windows updates), but that’s just my humble opinion.

Besides the cool tabbed frame, and being able to easily toggle between different PowerShell, Bash (WSL), and CMD sessions, you can customize a lot of things about Windows Terminal. And geeks love customizing things.

Just to be clear, Terminal is not a replacement for PowerShell, PowerShell ISE or Visual Studio Code. It’s more like an enhanced PowerShell console, combined with CMD, and Azure Cloud Shell, if you wish. It’s not a code editor. It’s a code execution tool, or an extension thereof. Terminal is available for installation on Windows 10 (1903 or later) only, for now at least.

To start customizing things, click the small tab to the right of the heading bar, with the down-arrow, then click “Settings”. I strongly advise you to make a backup copy of your existing profiles.json file before customizing it, just in case.
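Making that backup copy is a one-liner. This assumes the Microsoft Store install, where the settings file typically lives under your local app data (the path may vary by version):

```powershell
# Back up profiles.json before editing (path for the Microsoft Store install)
$src = "$env:LOCALAPPDATA\Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState\profiles.json"
if (Test-Path $src) {
	Copy-Item -Path $src -Destination "$src.bak"
}
```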

There are all sorts of things you can adjust, and the “profiles” section is where you can create and configure the properties for each tab selection. In the example above, I have profiles for “PowerShell 5.1”, “MarvinShell 5.1”, “PowerShell Core 6.2”, “PowerShell 7 preview”, “Azure Cloud Shell” and “Command”.

You might assume that I only customized the background images for each tab/profile to be cute. I’m not that cute. It’s actually to help me visually identify what my code is currently running on, as I switch between tabs, which also have the names shown for clear identification.

First off, notice that the “$schema” property references a URL which provides all of the possible attributes, particularly within the “profiles” section.

You can see the matching of attributes between the profiles.json example (above), and the schema template (below). Things like “acrylicOpacity”, “backgroundImageOpacity” and so forth. The “enum” lists beneath each attribute in the template provide the expected values, or types of values, you can assign to each.

Some of the examples I’ve run across which demonstrate the “backgroundImage” attribute, use a local image file. I would recommend against that, unless you point to a location which is backed up somewhere automatically (the Cloud). I prefer to put my image files online, such as under my GitHub account. That way, no matter where I apply this “theme” (if you will) it works, as long as I have an Internet connection.

Also, you may find you need to play around with the “backgroundImageStretchMode” and “backgroundImageOpacity” settings based on the background image you are using. It’s sometimes trial-and-error getting it to look the way you want. But once you have it just right, save your profiles.json somewhere for recovery (but don’t overwrite your default vanilla profiles.json backup).
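To make that concrete, a single profile entry might look something like this (the attribute names come from the schema; the image URL is a placeholder for wherever you host yours):

```json
{
	"name": "PowerShell 5.1",
	"commandline": "powershell.exe",
	"backgroundImage": "https://raw.githubusercontent.com/<your-account>/<your-repo>/master/images/ps51.jpg",
	"backgroundImageStretchMode": "uniformToFill",
	"backgroundImageOpacity": 0.35,
	"useAcrylic": true,
	"acrylicOpacity": 0.8
}
```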

Why Bother?

So, the questions I hear quite a bit are, “why bother with Terminal?” and, “why bother customizing it?”

Regarding the “why bother with it” question: it can be a useful tool when testing and debugging your PowerShell scripts and modules in different versions of PowerShell, as well as Azure Cloud Shell.

Regarding the “why bother customizing it” question: nerds typically like to customize their tools and environments. It makes us feel human, rather than part of a cold machine. However, I personally prefer having a visual indicator of which environment I’m running my code in, especially when switching between tabs. The tab labels alone should suffice, but my brain needs that extra oompf! to get the point sometimes.

In any case, Terminal is easy to install. It’s fast and convenient. It offers quite a bit of flexibility, and it gives you control over how you want it to look and feel.

More Information

Terminal project repository

My customization stuff

Customization How-To:

Project documentation:

Terminal JSON documentation:

Install Terminal from the Microsoft Store:

Install Terminal using Chocolatey: cinst microsoft-windows-terminal


5 Things You Should Have Automated by Now

The Long Back-Story

(queue the campfire scene, under the stars, with distant harmonica and bearded old man, smoking a pipe of something, and all the little systems engineers, all gathered around to listen in their fuzzy pajamas)

For the last three decades, the roaming bean-counters of the world have quietly been building up a pressure-cooker of angst from all the walk-up status inquiries in the IT cube farms of the world. Each time they’d ask for a status update, they’d get a magical (mythical) answer. Specificity was lacking. Upper management was not happy. Vendors kept nodding in agreement, but were still focused on the product users, not the check-writers. That changed soon after the Cloud popped up.

I may blog about my thoughts on “The Future of the IT Worker”, if I have enough wine or beer to motivate me.

Short version: Shareholders buy stock in a company to make a profit on rising value (stock prices). Stock prices rise when the company increases profits. To increase profits, the company can only increase the gap between revenue and expenses. For 99.9% of businesses, IT is a “cost center”, or an expense. Shareholders DGAF* about imaging computers, change management reviews, or what your name is. They care about 2 things:

  • Increased profit margins
  • No bad press

Both of those points are impacted by expenses. Shareholders don’t like expenses. They bitch about expenses, a lot. They hire consultants to analyze expenses, and these days, one of the first areas they look at is IT. Asking questions like:

  • Why so many IT staff?
  • Why are you re-imaging every computer you buy, when they already work?
  • Why do you still have datacenters?
  • Can we move to a cheaper lease?
  • Training?! You don’t know this stuff already?

Seriously, the emphasis on “what value do you bring to the company?” is only going to get heavier and heavier.

So, in the interests of making yourself more valuable, I suggest bringing a little automation to your job. And, based on what most customers I know have already implemented, this is my 5-point list of gotta-have things:

[1] Active Directory User Account Processing

New hires. Temp staffing. Terminations. Name changes. Promotions and transfers. All of these tend to chip away at your precious time. Relying on a bundle of task-specific scripts is a good start: creating accounts, resetting passwords, adding/removing group members, and so on. But anything you have to stop and tend to with your own hands needs to be considered for automation.

Like all automation processes, it starts with the “authoritative source” of information. Usually HR. Whatever data they’re entering for a new hire, use that to drive everything else. Do not duplicate efforts by entering that information again somewhere else, as this not only wastes time, but adds risk of inconsistency.

If you don’t already have it, request access to whatever information you need to drive the entire process along. Make a list of all the user-related processes you deal with. Divide each process into distinct phases or tasks and work on them one at a time until you have the whole conveyor belt running.

Ideally, when HR says someone has been hired, your IT systems should immediately handle it. Changed departments? New surname? New job title? Done. Got fired for having sex on a forklift during work hours? Done.

Gaining experience with the HR systems and processes not only makes your job easier, it makes your role more valuable in the organization. Once the processes are automated, they will run more consistently and predictably, even if you go on vacation, and the organization will likely ask you for help automating other processes.
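To make the “conveyor belt” idea concrete, here’s a purely illustrative sketch of driving new-hire account creation from an HR export. The file path and column names are invented; substitute whatever your HR feed actually provides:

```powershell
# Illustrative only: create AD accounts from an HR CSV feed
# (path and column names are hypothetical - map them to your real HR export)
Import-Module ActiveDirectory

Import-Csv -Path "\\hr-server\feeds\new-hires.csv" | ForEach-Object {
	New-ADUser -Name "$($_.FirstName) $($_.LastName)" `
		-SamAccountName $_.SamAccountName `
		-Department $_.Department `
		-Title $_.JobTitle `
		-AccountPassword (ConvertTo-SecureString $_.TempPassword -AsPlainText -Force) `
		-Enabled $true
}
```

From there, each additional process (transfers, name changes, terminations) is just another column-to-cmdlet mapping on the same feed.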

[2] Active Directory Computer Accounts Clean-Up

If you only have a dozen or so computers in your AD domain, you might get a pass here. But if you’re managing dozens, hundreds or thousands of computers, and you’re not running some sort of automated process to clean-out stale/unused accounts, you should be tasered in the crotch until the battery goes dead.

If you don’t already have something in place to automate this boring-ass chore, get moving. It’s really easy to implement a 3-step clean-up process:

  • Determine what criteria will be used to say a device account is stale
  • Identify and move stale accounts to an OU, and disable them
  • After X days, delete them

Once that process is tested, schedule it to run on its own.
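A minimal sketch of that 3-step process, assuming the RSAT ActiveDirectory module and a pre-created “stale” OU (the 90-day threshold and OU path below are just examples):

```powershell
# Step 1: define "stale" - here, no logon activity for 90 days
$stale = Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -ComputersOnly

# Step 2: disable stale computer accounts and move them to a holding OU
$staleOU = "OU=Stale Computers,DC=contoso,DC=com"   # example path
foreach ($computer in $stale) {
	Disable-ADAccount -Identity $computer
	Move-ADObject -Identity $computer.DistinguishedName -TargetPath $staleOU
}

# Step 3: after X days, delete whatever is still disabled in that OU
Get-ADComputer -SearchBase $staleOU -Filter { Enabled -eq $false } |
	Remove-ADObject -Recursive -Confirm:$false
```

Wrap it in logging, test it against a pilot OU first, and then hand it to a scheduled task.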

There are hundreds of utilities and scripts available today to help automate this process, or you can build your own. Having a process in place means you can answer questions about asset inventory with a straight face, and calm down those bean-counters who freak out over the thought that things are out of control. “Relax, bean-counter person. I have it under control.”

Icing on the cake: “I know we requested 1500 licenses of that software, but I confirmed we only need 1250. And with that $3000 I saved us, I’d like to attend MMS MOA this year, and buy a Hello Kitty flamethrower.”

[3] Patch Management

The biggest problem I see today isn’t the patching itself, or the tools available to manage the patching. The biggest problem I still see is a lack of a process or procedure. If you’re still manually updating computers, especially endpoint devices (desktops, laptops, tablets, etc.), but even servers, pause here and do the following first:

  • Design a patching process: What, When, Where, and Who (owns each machine or system)
  • Give each group of machines or systems a name
  • Identify test machines within each group to validate monthly patches
  • Identify machines that can be patched at the same time, and which ones cannot.
  • Identify when machines can be rebooted

Having that mapped out will make it so much easier to pick and test the right solution (product or script).
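Even just capturing that map as plain data gets you most of the way there. A hypothetical sketch (every group name, owner, and window below is invented):

```powershell
# Hypothetical patch-group map: feed this to whatever tool does the actual patching
$patchGroups = @(
	[pscustomobject]@{ Name = 'Ring0-Test';    Owner = 'Desktop Team'; PatchDay = 'Patch Tuesday +2';  RebootWindow = '20:00-23:00' }
	[pscustomobject]@{ Name = 'Ring1-Broad';   Owner = 'Desktop Team'; PatchDay = 'Patch Tuesday +7';  RebootWindow = '21:00-05:00' }
	[pscustomobject]@{ Name = 'Ring2-Servers'; Owner = 'Server Team';  PatchDay = 'Patch Tuesday +14'; RebootWindow = 'Sat 02:00-04:00' }
)
$patchGroups | Format-Table
```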

After that, use your selected “test” machines for the initial pilot, and scale out from there. Start with the less critical machines and add the more critical machines later. That way you cover more machines early on, and work out the kinks before touching the high risk environments.

In the VAST majority of environments I’ve seen, the exception cases are the minority. So knocking out the machines with a consistent schedule also knocks out the biggest portion overall.

[4] Inventory Reporting

Fancy or basic, it doesn’t matter. The only thing that matters when the executives ask “how many ___ do we have?” is whether you can answer the question without lying your ass off. The other thing that matters is when the BSA* comes to your door with a warrant, but that’s another story altogether.

How anyone can manage a computing environment without some sort of inventory reporting is beyond reason. That’s like expecting airlines to operate without flight plans.

Of all the examples listed in this post, this one is the oldest. And since it’s been around the longest, there’s really no acceptable excuse not to have it automated by now.

If you don’t have a software product, or service, in use, get one. Many are free. If they don’t cut it, you can easily build your own with scripting and duct tape. Even if your devices are scattered across the globe, as long as they can touch the Internet, you can build something to make them squeal and give up their inventory data.
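If you go the scripting-and-duct-tape route, a starting point might look something like this rough PowerShell sketch, which grabs basic hardware, OS, and installed-software details and dumps them as JSON. The output share path is purely a placeholder — point it at a file share, blob container, or REST endpoint your devices can actually reach.

```powershell
# Rough sketch of a home-grown inventory grab. The output path below is
# hypothetical -- swap in a location that your devices can reach.
$inventory = [pscustomobject]@{
    ComputerName = $env:COMPUTERNAME
    Collected    = (Get-Date).ToString('s')
    OS           = (Get-CimInstance Win32_OperatingSystem).Caption
    Model        = (Get-CimInstance Win32_ComputerSystem).Model
    Software     = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
                   Where-Object DisplayName |
                   Select-Object DisplayName, DisplayVersion
}

$inventory | ConvertTo-Json -Depth 3 |
    Out-File "\\server\inventory$\$($env:COMPUTERNAME).json"
```

Schedule something like that to run daily with Task Scheduler, and you have a poor man’s inventory feed you can actually report against.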

[5] Event Monitoring

Imagine if your car didn’t have a dashboard. Or your smartphone didn’t have a battery indicator. That’s pretty much the same thing when you manage computers without some sort of event and/or log monitoring. The data is being tracked somewhere, but unless you have a clear view of it, you’ll never know. Until it all goes sideways, and then you’re scrambling to figure out where to look “under the hood” while the house is burning down.

Of all the support cases I ran into between 2015-2019, which related to some sort of “oh shit, our shit is broke! please help fix!“, most of the root causes fell into one of the following buckets:

  • Ran out of disk space
  • Service account was locked
  • Service failed to start
  • Configuration change impacted other processes
  • Network connectivity failure
  • Anti-virus was blocking a critical process

Every single one of these could have been avoided with the following simple tools:

  • A monitor to report potential problems
  • An automated process to remediate each of the potential problems before they get worse

Flying blind is no way to run a datacenter, let alone a bunch of computers. Whether you prefer to buy a solution, or build it yourself, just get something in place. In every instance where this was done, the number of “oh shit!” events dropped significantly.

Maybe you like getting a panicked call from a manager on the weekends, at 3am on a weekday, or while you’re on vacation. That’s not my idea of a happy life. And applying some basic automation to monitoring is not only one of the easiest types of automation, it’s often a good on-ramp to scaling your efforts into other areas that drain your time every day.


business, Projects, Technology

The Top 20 IT Problems I Still See at Most Organizations

[NSFW-Warning] The following list of douche-brained issues is in no particular order, so some items may be more or less important to you than others, regardless of their assigned number. I just needed a way to count them after finishing a glass of whiskey (with an “e”).

Most, if not all, of these are systemic and human-related. Most of them appear in groups rather than in isolation, which points to a lack of coordinated management from the top, but that doesn’t excuse those at the bottom by any means. Ultimately, continued disregard of these will lead to what some analysts refer to as your shit will become entirely fucked.

Remember the 2 most important rules:

  • Good | Fast | Cheap >> pick 2
  • TARFU >> SNAFU >> FUBAR >> GFO (the “G” is for “Goat”, the animal)

In many cases, failing to address these items can land your organization on front-page news in a bad way. And shareholders do not like front-page bad news.

20 – Turning off Firewalls

A lot of organizations have adopted the habit of turning off client firewalls entirely. Even if you use a network firewall, turning off client (and especially server) firewall services is a bad idea. It’s always best to identify port and protocol requirements for whatever apps and services need to communicate, and allow those through the firewall as needed.

Leaving the firewall completely off is like leaving the bathroom stall door wide-open while you’re taking a dump at Walmart. If that’s your thing, more power to you, but I hope you stock up on antibiotics and chap stick. I mean, seriously, if your application needs to blabber on port TCP 1433, and only inbound, then open TCP 1433 inbound. It’s not that hard to do.

19 – Turning off UAC (Windows 10)

Similar to turning off the firewall service, this is a bad idea. Yet, I still see this fairly often. UAC, or User Account Control, is a protection service that intercepts requests for system-wide resources, and prompts for administrative approval before continuing. It’s the bouncer at the club. Unless you like fully-armed meth addicts wandering into your kid’s birthday party, you should keep UAC turned on.

If you use an application that requires you to disable UAC, get another app. If there isn’t another app, lean on the vendor REAL hard to stop drinking the bongwater and get off the sofa to fix it. It’s a clear sign of lazy programming, and it’s your money, so you have a right to complain.

18 – Ignoring Firmware Updates

This often smacks people in the face during device imaging (aka provisioning). “Why won’t this )#$*ing thing PXE boot?!” Yeah, check the BIOS version. Visit your network, storage and security appliances on a regular basis to check their firmware too.

Hell, check all your shit. You’re probably jaw-jacking about last night’s sports game anyway, or wasting time on Facebook. While you’re busy arguing about Trump and Pelosi, hackers are applying Vaseline to your security holes, and well, that just sounds bad already.

17 – Installing Multiple Anti-Malware Products

That was the “in” thing to do in the early 2000s, when you had Justin Bieber posters on your wall and spent evenings updating your MySpace page.

But today’s products have grown up. While most of them do a very good job of protecting your system from being hacked, the built-in Windows Defender product has quietly stepped up to the lead of the pack.

If you’re using Symantec, McAfee, Cylance, CrowdStrike, Carbon Black, Sophos, Malwarebytes, Jimmy’s Atomic AntiVirus, or whatever, you really should do a careful evaluation of what you really need. You could save a lot of money and headache in the end. And I just hate the shit out of McAfee. I had to get that in before I move on to the next item.

In most cases, if there is someone with a job title that references the product (e.g. “McAfee administrator”) it will be tough prying them away from it, because it will feel like a direct threat to their job. Be tactful. If that doesn’t work: try pepper spray.

16 – Not Keeping Active Directory Clean

I would estimate that in the last 10 years, 90% of customers I’ve encountered don’t apply rigorous controls over Active Directory user and computer accounts. The portion of stale accounts often ranges between 10% to 30%, and often includes virtual machine accounts.

I’m not going to dive into methods for cleaning up AD, or keeping it clean, just Google it. There’s a metric shit-ton of free solutions out there. If you prefer to write your own using PowerShell or something else, that’s pretty easy as well. But first: get a process in place and then build the solution.

The most common approach I’ve seen (and recommend) is a 2-step offboarding process:

  • Step 1: Identify accounts that you determine are obsolete, disable them, and (optionally) move them to a special OU.
  • Step 2: Delete them little bastards. (I recommend yelling out “Die you little bastards!!!!” with a diabolical laugh)

When you’ve tested it enough, set it to run as a scheduled job. Don’t forget to include logging.
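As a rough sketch (not a drop-in solution), the two steps might look like this in PowerShell. It assumes the RSAT ActiveDirectory module, a 90-day staleness threshold, and a parking OU at “OU=Stale,DC=contoso,DC=com” — all placeholders for whatever your process defines.

```powershell
# Step 1: disable stale computer accounts and park them in a dedicated OU.
# The threshold, OU path, domain, and log path are all assumptions.
Import-Module ActiveDirectory

$cutoff    = (Get-Date).AddDays(-90)
$parkingOU = 'OU=Stale,DC=contoso,DC=com'
$logFile   = 'C:\Logs\ad-cleanup.log'

Get-ADComputer -Filter 'LastLogonTimeStamp -lt $cutoff' -Properties LastLogonTimeStamp |
    ForEach-Object {
        Disable-ADAccount -Identity $_
        Move-ADObject -Identity $_.DistinguishedName -TargetPath $parkingOU
        "$(Get-Date -Format s) disabled and moved $($_.Name)" | Add-Content $logFile
    }

# Step 2 (run on a later schedule): die you little bastards!!!!
Get-ADComputer -Filter * -SearchBase $parkingOU |
    Where-Object { -not $_.Enabled } |
    Remove-ADObject -Recursive -Confirm:$false
```

Run Step 1 and Step 2 as separate scheduled jobs, spaced out by your “X days” grace period, so you have a window to resurrect anything that was disabled by mistake.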

15 – Not Keeping Up With Software Updates

This is by far THE most common issue in most organizations, big or small. And I’m pretty sure this doesn’t surprise you either. But the most common problem I see is not having a patching process defined. At the very least, I recommend identifying what your patching process needs to include:

  • Products to be patched
  • Machines to be patched
  • Business or process schedules to work around
  • Who “owns” the products and machines/systems
  • Where are the machines (on-prem, roaming, cloud, etc.)
  • Test machines and test users

Once you have the what, when, where, and who, you can look at the how. You don’t need to worry about “why” unless you’re dealing with complete idiots. Compare patch management products carefully to see which ones fit your needs, and your budget. Pilot test them and confirm the results. Start small and scale out. Then start working on your drinking skills, because you’re going to need them.

14 – Doing More Configuration Work Than Necessary

This is another thing I see mostly with regards to imaging computers. Custom Start menu and Taskbar layouts, shortcuts, wallpaper backgrounds, screensavers, legal banners, yada yada… I recommend having a sit-down with your users and ask some really basic questions, like:

  • Are you so stupid that you really need “us” to prepare your start menu for you?
  • Do you really need us to make shortcuts to the company intranet web site?
  • Are you smart enough to respect our advice to use Chrome or Firefox for most of our web sites and not the built-in Edge browser?
  • Did you know that legal banners at login don’t really hold up in court anymore?
  • Are you still awake? Is this thing on? Hello? (thump thump)

In the past 5 years, I’ve managed to get 3 customers to meet with their users and their upper management, and eliminate a good chunk of the silly training wheels and plastic helmet crap they were spending days keeping up to date. In the process they also gained a bit more respect from the users who no longer viewed IT as condescending control-freaks, but as enablers.

And if that doesn’t work, a pipe wrench should.

13 – Paying for Products and Services Not Needed

Custom apps are fine when they fill a need that just doesn’t exist in the base operating system, or in another product you already have. Supporting multiple competing products can lead to all kinds of headaches, from deployment and updates to licensing, training, support requests, and more. In every single case I’ve encountered, the reason given was “we’ve always used it”, which isn’t technically a bad reason. But failing to revisit alternative options is technically a bad habit.

I’ve seen companies where different divisions used different products for the exact same function. As it turned out, picking ANY one of the products as the company-wide standard would have saved them a significant amount through volume discount licensing. In several other cases, customers were paying for client backup software licenses even though they had cloud drive sync products installed and in use.

Other examples: Visio Pro, and Project Pro, when Standard licenses would have been just fine. PDF editors, when viewing and printing was all they really needed. The list goes on. TIP: Consider a Bounty program (see below)

12 – Granting Local Admin Rights by Default

I’ve blogged about this several times already. Most recently here. In short, don’t do it. Let them complain, after all, it’s for the good of the company. Or better yet: implement an embarrassment protocol for requesting admin rights. Something like having to come to work dressed as a giant sex toy, or wearing diapers on their head all day. Actually, you better check with HR first. And if they approve, ask if you can carry a sidearm to work as well.

11 – Insufficient Training

Technology is changing faster than ever before. Training is becoming a continuous pipeline. If your employer values what you do for them, they’ll value keeping you on top of it as well. If they don’t, then maybe it’s time to move on. There are employers out there who value their IT staff.

10 – Insufficient Staffing

Whether you’re any good at what you do, or not, eventually, you may end up with too much on your plate. If you spend most of your time keeping things running, rather than exploring ways to improve things and lower costs, then you may need a few more people to handle the load.

I’ve blogged about this quite a few times as well. The ever-popular “do more with less” mantra is becoming stupid. Not only is it bad for employee morale and productivity, it’s even worse for the employer because it creates greater risk to business continuity (the proverbial “hit by a bus” scenario).

If upper management doesn’t care about turnover rates, then it’s your signal to move on.

9 – Lack of Documentation

Often a sign of insufficient staffing (see previous item), this is usually from neglect due to lack of time. But when it suffers from lack of having defined policies and procedures, then you have bigger problems. Make a hit list of things to document first, and get to work. Even if you just get outlines and brief notes written down, that’s better than nothing at all.

If you suffer from lack of staffing, here’s your chance to prepare for a new hire. Make sure you provide enough detail to hand to the new person and let them run with it. You’ll be glad later on, that you took the time to do it.

8 – Insufficient Change Control

Everyone hates change management. It’s a necessary evil, especially in larger organizations. Many small shops can get away with minimal CM effort, but even then, it pays to keep track of changes and dates. So many times I’ve seen it point to a root cause for those “It just started failing. But we didn’t change anything” situations.

7 – Avoiding Cloud Services

Without sounding like a salesperson: not only is the Cloud here to stay, it is absolutely the future. We are at the same crossroads that existed when the automobile was sharing the roads with horses and buggies. Love it or hate it, get used to it. The sooner you get familiar with it, the better off you’ll be (and the more marketable your skills will be).

Don’t worry about feeling left behind or hopelessly lost. There’s plenty of online training and tutorials to help you. And everyone, I mean everyone, is still on a continuous learning conveyor belt just trying to keep up. It’s evolving and improving every day, so nobody knows it all.

6 – Using Servers as Desktop Computers

It’s one thing to use a “jump server” for isolating access to production environments. But you shouldn’t use production servers like a routine desktop computer because it adds more junk and overhead with each user profile and user session. It also opens up more risk for hacking exploits and malware.

Windows Servers in particular were designed for remote management. Make use of that. Whether it’s MMC, Server Manager, Windows Admin Center, CLI or PowerShell, there are many ways to easily manage remote servers.

5 – Treating Users Like the Enemy

It seems to me (anecdotally/subjectively) that about 25% of organizations with more than 200 users develop an “us vs. them” culture. Either users hate IT, vice versa, or both. Forcing conditions on users without giving them an opportunity to request changes only makes them unhappy and less productive. You should want your organization to thrive. If not, you’re in the wrong job. Quit now.

Rather than making one-sided assumptions about what should or should not be included in their standard computer setup, start having conversations with them about what they like and don’t like. Take notes. Ask deeper questions. Even if you end up not making any real changes, it instills a sense in them that your group cares. That alone can go hundreds of miles when you need their help later on for a big project. Users as allies is always better than users as enemies.

4 – Not Selling Themselves to Upper Management

Not getting the budget you asked for? Not getting anyone to approve another position to hire? Not getting approved for training? Not getting approved to modify an old process?

Here’s why that is: Suits talk, read, think and dream in dollar figures. You have to convert nerd-speak into dollar-speak, or it doesn’t work. But beyond just translating your request for new software into dollars (costs), you need to show believable calculation of how it will either [A] lower operating costs, or [B] generate more revenue without incurring equal-or-greater costs. That’s it. Oh, and pour some chart sauce on it. Suits love chart sauce.

3 – Insufficient Monitoring and Reporting

So many times when I ask “How is <fill-in-product-name> working for you guys?”, they respond with “It’s working good”. Then I follow that with “How do you know?” The responses from there can be interesting. Everything from “Well, it hasn’t broken down or anything” to “It seems to work okay”.

But the only answer I want to hear is “Our monitoring and reports indicate it’s working fine”, or maybe “Our monitoring shows it’s working fine, but we could use a few tune-ups to make it better”.

The flip-side to this is alert-overload, where monitoring is excessive and sending so much information to the IT staff that they soon ignore it.

Anyone familiar with monitoring, regardless of product, will agree that it’s somewhat of an art to find the ideal balance of alerts and reports. Just enough to feel confident in what’s happening, but not too much to keep up with. However, if I had to err on one side or the other: too much, or none, I’ll go with too much any day. At least that can be tuned. Having no monitoring solution in place is just flying blind and asking for trouble.

This doesn’t mean you need to buy an expensive monitoring solution, although many of them are certainly worth the cost if you need the features they provide. You can get by using built-in tools like Event Forwarding, PowerShell scripting, and the old Task Scheduler. Combine those three with a few cans of Red Bull, some sugary snacks, and a couple of double espressos, and you can build amazing things.
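For example, here’s a minimal sketch of the classic disk-space scenario — a script you could drop into Task Scheduler that writes a warning to the Application event log whenever a fixed drive runs low. The threshold and event source name are made up; tune them for your environment.

```powershell
# Home-grown monitor sketch: warn in the event log when any fixed drive
# falls below a free-space threshold. Threshold and event source are
# assumptions -- pick values that fit your environment.
$thresholdPct = 10
$source       = 'HomegrownMonitor'

# Register the event source on first run
if (-not [System.Diagnostics.EventLog]::SourceExists($source)) {
    New-EventLog -LogName Application -Source $source
}

# DriveType=3 limits the check to local fixed disks
Get-CimInstance Win32_LogicalDisk -Filter 'DriveType=3' | ForEach-Object {
    $freePct = [math]::Round(($_.FreeSpace / $_.Size) * 100, 1)
    if ($freePct -lt $thresholdPct) {
        Write-EventLog -LogName Application -Source $source -EntryType Warning `
            -EventId 100 -Message "Drive $($_.DeviceID) is down to $freePct% free"
    }
}
```

Pair something like that with Event Forwarding (or just an email in the script) and the “ran out of disk space” surprises mostly disappear.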

But many shops still have absolutely nothing in place to alert them when something fails. This is very risky and easy to avoid.

2 – Poorly Maintained Network Infrastructure (and DNS, DHCP)

We’ve all heard the jokes about “it’s never DNS” and how often we find out it was DNS. But I still see problems due to neglect. The most common issue I see is a perception that DNS just takes care of itself. A Ron Popeil “set it and forget it” service that needs no TLC. This is a bad perception. That’s like saying your car changes its own oil (you Tesla owners just be quiet and go along with me here, mmkay?)

Some typical things that shine a light on lack of oversight:

  • New subnets created without being communicated
  • Changes to VLANs and DHCP scopes without being communicated
  • Firewall changes without being communicated
  • Firmware updates affecting routing
  • Subnets without a DHCP server (and no routing)
  • PXE across multiple subnets without IP helpers
  • Not reviewing DHCP scope settings after shifting device usage from static to roaming (older environments)

1 – Lack of Concern About Security

“Nobody would bother hacking us”. To me that sounds like “This ship could never sink!” While many shops go crazy with anti-malware products, there are also those who just don’t use anything. It sounds crazy, and some consultants I know refuse to believe these unicorns really exist. They do.

Besides the endpoint protection aspects, cyber-security as a whole is a big topic, and more than I can cover even if it were the entire blog post subject. But… the emphasis on security has to start from the very top. The numero uno golf-player in your organization has to be the one pushing it, or it doesn’t work. There are exceptions where the grass-roots approach works, but it’s not the norm. Because it takes a concerted effort from IT, users, partners, customers, and suppliers to make a dent.

The news is full of stories where shit imploded in full public view because someone missed something and now it’s too late. You’d think with so many still showing up in headlines, that by now there’d be almost no one left who isn’t working furiously to close the loopholes. You’d be wrong.

Part of the issue (in my humble opinion), in the US at least, is the lack of consequences those companies face when they suffer a major breach. Especially when the breach impacts customers. CIOs are rarely held accountable in any meaningful way (e.g. getting a $500k severance package isn’t a real punishment for failing the company).

Did someone say Bounty Program?

If you’re not familiar with the term “bounty program”, you may at least be familiar with the practice at tech companies like Microsoft and Google of paying $$$ to hackers who find and (privately, of course) report vulnerabilities in their products.

Companies (organizations in general) can adopt a similar approach by offering employees a reward (bonus pay, added vacation, etc.) for reporting vulnerabilities in their IT services and tools. If you do consider this approach, be sure to meticulously document the rules, and consult your legal counsel and HR department.

I have had conversations with customers who have used this approach with success, and it can be very helpful if planned and managed properly. Also, don’t limit the participation to only IT staff. Encourage any employee to participate.

It’s 2020. Make your checklist and get busy!


Bye Bye 2019. Hello 2020

My blogging has slowed considerably this past year. Partly due to so much going on that I didn’t have time to let my brain wander, and partly due to my anonymity getting blown, which makes it more difficult to share a lot of raw thoughts. At least until I win a big-enough PowerBall lottery pay-out, if there is a “big-enough”.

Another part of it is the purely technical stuff: with so many others pumping out quality content this year, I don’t feel the urge to attempt more “me-too” in that space. Don’t worry, I’m not doing a “me too” for being sexually abused by a famous person. Although, that might not be a bad idea. Let me check my contacts list, hold on…

ok. No blackmail material. I’m still broke.


2019 was a blur. In a nutshell…

  • Daughter no.1 had a baby (our first grandbaby)
  • Daughter no.2 got married
  • Daughter no.3 got married and moved to another state
  • Daughter no.3 and her husband moved again, to Italy
  • Son graduated college and started working
  • Son turned 21
  • I got to present at MMS Jazz Edition
  • I got to present at PowerShell Saturday
  • I got to present for a few online user groups
  • I stepped up woodworking efforts a bit

In the process, I wiped out our savings, obviously, so our plans to move have been delayed again. I had to back away from attending local user groups more than I wanted to, and postpone some personal travel and other things.

In a lot of ways, okay most ways, I would say 2019 has been one of my best years on this ball of dirt that I can recall. I’ve had bigger spikes in years past, but on the whole, considering January to December, it’s been a fantastic and humbling year.

One of the biggest highlights for me was getting to meet a lot of you in person. Some at PowerShell Saturday in Raleigh NC, and some at MMS Jazz in New Orleans.


Next year, so far, I’m scheduled to speak at the PowerShell + DevOps Global Summit in Bellevue, WA in late April. Then, fingers crossed, I hope I’m accepted for at least one session at MMS MOA in early May.

My plans in between are pretty simple:

  • Try to save up again to move (I freaking hate where I live)
  • Travel more
  • Continue my quest to sample every beer ever brewed
  • Stay healthy (I’ve kept my doctor poor the last 2 years)
  • Tweet quality content (for once)
  • Learn more
  • Share more
  • Love more… wait. okay hold on. Not so fast.

I hope your 2020 is fantastic and filled with lots of fun, learning, and helping others along the way.