Devices, Scripting, Technology, windows

100-level 101

By my semi-quasi scientific reasoning, I estimate that this scenario has occurred in my presence approximately 34.75 times in the past 10 years. That number could be completely fictitious, but you have to prove me wrong, so good luck.

Anyhow, it happened yesterday, and today I had to actually apply it again myself, so it reminded me to blabber about it again, here, on my blabber blog. Remember, this is 100-level 101 stuff, so if you start rolling your eyes, I warned you already.

Challenge: You need to confirm a registry key is set on a remote client, RIGHT THIS FREAKING SECOND. The registry key is under one of the users who uses that machine. You only know the following:

  • The machine name
  • The user’s first and last name

Caveats: You are logged onto one of the domain controllers. You do not have Configuration Manager. You only have a keyboard, a mouse, a brain, a pair of eyeballs, and possibly a sleeping dog and angry cat nearby. Nothing else. Clothing is optional.


  • You ping the remote computer (e.g. “DT001”) and it responds with a happy wave and a smile.
  • You open trusty, old, bearded REGEDIT.exe and click File / Connect Network Registry. You enter the computer name (e.g. “DT001”). It tells you to **** off.
  • You apply some wax to your mustache and curl the ends neatly, crack your knuckles and continue. If you don’t have a mustache, use someone else’s for now.
  • Open a PowerShell console
  • Type: Get-Service RemoteRegistry -ComputerName DT001
  • It returns some information, including Status = “Stopped”
  • You attempt to start it: Get-Service RemoteRegistry -ComputerName DT001 | Start-Service. But it tells you to **** off.
  • You crack your knuckles once more and don a sinister look, like Daniel Day Lewis in There Will Be Blood
  • Set-Service RemoteRegistry -ComputerName DT001 -StartupType Manual
  • Get-Service RemoteRegistry -ComputerName DT001 | Start-Service
  • So far, so good. Go back to REGEDIT and connect successfully
  • You open HKEY_USERS and see a bunch of SID stuff, like “S-1-5-21-1234567890-0987654321-234234234234-1234”, but you don’t know which one is related to the desired user account
  • Your dog reminds you that you are currently logged onto a domain controller.
  • You know the user is “Jimmy Jerkweed”, so you search for him using Get-ADUser -Filter 'Name -like "*Jerkweed*"' | select *
  • You find one with a SID property that matches the registry key names and dive in

The Short Version

  • ping DT001
  • Set-Service RemoteRegistry -ComputerName DT001 -StartupType Manual
  • Get-Service RemoteRegistry -ComputerName DT001 | Start-Service
  • Regedit.exe / Connect Network Registry / DT001
  • Get-ADUser -Filter 'Name -like "*jerkweed*"' | select SID

Way too many times, this would stop at the second bullet (above). The technician would insist that either a firewall or antivirus was blocking access. Or maybe there was a problem with the machine. Not so.

  • By default, the Remote Registry service is disabled. A disabled service cannot be started, locally or remotely, until its startup type is changed.
  • Without this service running, you cannot connect to the registry from a different machine on the network, regardless of your privileges.
  • In most cases, by default, as a user with direct (or indirect) administrative rights on the remote machine, you can change the service startup type property from “disabled” to “manual”, allowing you to then start it, even remotely.
  • When using a Windows workstation, or member server (not a domain controller), you can also run the Get-ADxxxx cmdlets, if you have RSAT installed and enabled. If you don’t, and can’t, you can install the AdsiPS PowerShell module and do the same using Get-AdsiUser.
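For the record, the whole dance can also be done without regedit. A rough sketch (the machine name, user name, and registry key path are just examples from this post; assumes the ActiveDirectory module and admin rights on the target):

```powershell
# resolve the user's SID from AD, then read their HKEY_USERS branch remotely
$sid = (Get-ADUser -Filter 'Name -like "*Jerkweed*"').SID.Value

Invoke-Command -ComputerName 'DT001' -ScriptBlock {
    param($sid)
    # HKEY_USERS is not mapped as a PSDrive by default, so map it first
    New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS | Out-Null
    Get-ItemProperty -Path "HKU:\$sid\Software\SomeVendor\SomeApp"   # hypothetical key
} -ArgumentList $sid
```

This leans on PowerShell remoting (WinRM) rather than the Remote Registry service, so it can work even when the regedit route is blocked.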


Projects, Scripting, Technology

Building Blocks: PowerShell module rollbacks

What is a “roll back” you ask? (I know you didn’t really ask, but for those that wanted to ask…) in general terms, it is rolling back to a previous version of some piece of software, in this case a PowerShell module. For example, going from module version 1.2 back to 1.1.

A customer asked me, “What’s the best way to roll back to a specific version of a PowerShell module?”

I said, “As a consultant, the answer is ‘it depends’”, ha ha! Just kidding. Well, kind of kidding. Okay, not really kidding, but all kidding aside… The process usually follows this workflow (assuming this is a public module, which you do not own/maintain):

Rollback Scenarios

Reminder: Because this happens so often, it’s like struggling with a USB plug: whenever you are working with installing, updating, or removing PowerShell modules, open the PowerShell console using “Run as administrator”. Alternatively, you can manage them under your “user” scope alone.

For the following examples, I’m using the PowerShell module: dbatools. There is nothing wrong (as far as I’ve seen) with the latest version, but I’m going to roll it back to a previous version to demonstrate my incoherent blabbering.

Scenario A – Old Version Still Installed

If the PowerShell module was updated using Update-Module, there’s a good chance that the prior version(s) are still installed on the local system. To confirm, use Get-Module <modulename> -ListAvailable.

In this example, I have two (2) versions installed (1.0.15 and 1.0.20). I want to uninstall the newer version (1.0.20) and leave only 1.0.15 installed.

I would normally use Uninstall-Module <modulename> -RequiredVersion <bad-version> or in this example: Uninstall-Module dbatools -RequiredVersion 1.0.20, as shown below.

You may get an error saying another module is “dependent” upon the one you’re trying to remove (see example above). If so, make note of the dependent module, uninstall it, then try the first uninstall again. Once you have the version you want, you can reinstall the dependent module (assuming it’s not actually dependent on the version you just uninstalled, doh!!)

After all this fuss, it now shows dbatools version 1.0.15 installed.
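Pulled together, Scenario A is just a few commands (versions are the ones from this example):

```powershell
# confirm which versions are installed side-by-side
Get-Module dbatools -ListAvailable | Select-Object Name, Version

# remove only the newer (bad) version; if a dependent module complains,
# uninstall it first, retry this, then reinstall the dependent module afterward
Uninstall-Module dbatools -RequiredVersion 1.0.20

# verify that only 1.0.15 remains
Get-Module dbatools -ListAvailable | Select-Object Name, Version
```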

Scenario B – PS Gallery

If only the newest version (the bad version) is installed, check to see if the prior version is still available on the PowerShell Gallery. You can do this using Find-Module <modulename> -AllVersions.

Warning: dbatools lists pretty much every version since inception, so the list is very long.

If the results show the version you want/need, simply uninstall the current module and install the specific version from the PS Gallery.

Tip: This method supports rolling back to as far back as the author maintains in the PS Gallery. If they chose to unlist a particular version that you need, this won’t work, and you’re on to scenario C below.
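Scenario B boils down to this (again using the dbatools example; as warned above, the -AllVersions listing is long):

```powershell
# confirm the prior version is still listed in the PS Gallery
Find-Module dbatools -AllVersions | Select-Object Name, Version

# remove whatever is installed locally, then pull down the specific version
Uninstall-Module dbatools -AllVersions
Install-Module dbatools -RequiredVersion 1.0.15
```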

Scenario C – GitHub Repository

If the prior version you need is no longer available on the PowerShell Gallery, the next place to look is on the “Project site” or GitHub repository. In some cases, this isn’t possible, but thankfully, it’s more often available than not.

Go to the GitHub site, open the repository, confirm the version, and the branch, and click the Clone or Download button, then click Download Zip. Extract the ZIP file contents somewhere.

Keep in mind that the folder structure provided by the GitHub ZIP download is not the same as what PowerShell modules require in the default path environment. Use the following command to display the current module path…

(Get-Module <name> -ListAvailable).Path

Note the version number in the path string. You will need to “spoof” this to match the version you downloaded so the PowerShell environment will properly recognize it. For this example, just pretend it shows “…\1.0.20\…” and “…\1.0.15\…” doesn’t exist.

Navigate to the parent folder (e.g. the module name itself, “dbatools”), such as “c:\Program Files\WindowsPowerShell\Modules\dbatools”

Create a new sub-folder for the version you want (i.e. “1.0.15”)

Open the ZIP file, drill-down under the first root-level folder, to see the main files and folders. Extract the contents from there into that new module path folder on your hard drive.

IMPORTANT: This extract/copy process will place more than is really needed, but it’s okay. PowerShell will only load what it needs and ignore what it doesn’t need.
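The folder "spoof" steps above can be sketched like this. It uses throwaway temp paths so it runs anywhere without touching your real module path; in real life the destination would be the versioned folder under C:\Program Files\WindowsPowerShell\Modules\dbatools:

```powershell
# stand-in for the GitHub ZIP already expanded to its first root-level folder
$work      = Join-Path ([IO.Path]::GetTempPath()) 'rollback-demo'
$extracted = Join-Path (Join-Path $work 'extracted') 'dbatools-master'
New-Item -ItemType Directory -Path $extracted -Force | Out-Null
Set-Content -Path (Join-Path $extracted 'dbatools.psd1') -Value '# manifest placeholder'

# create the version folder PowerShell expects, then copy the ZIP folder's *contents* into it
$versionFolder = Join-Path (Join-Path $work 'dbatools') '1.0.15'
New-Item -ItemType Directory -Path $versionFolder -Force | Out-Null
Copy-Item -Path (Join-Path $extracted '*') -Destination $versionFolder -Recurse -Force

Test-Path (Join-Path $versionFolder 'dbatools.psd1')   # True
```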

If there is no GitHub (or other) repository available, or the version is no longer available for some reason, you’re on to scenario D below.

Scenario D – F**k it

That’s right, just F**K it. Yell out obscenities, and claim you have Tourette syndrome. After you calm down, search for alternative sources:

  • Other systems which still have the older module version installed (copy the folders/files)
  • System or file backups which you could pilfer to get the older module files back. Use the $env:PSModulePath variable to guide you toward the folder and file location(s).
  • Call a friend who might have an older version installed somewhere, and threaten them with fresh doughnuts or cold beer, until they give in.

If that doesn’t work, go to a gym and beat up a punching bag for an hour.


As it turned out, they’d built a PowerShell-based automation process using internal scripts, and modules available on the PowerShell Gallery. Nothing unusual about that; it is what it was intended for. However, they had also built-in an automatic “update all modules” task at the beginning of their script.

This is a major no-no, because it violates basic “change control” rules. Every change (emphasis on “every“) should (read: must) be tested prior to applying in a production environment. Making the update process part of the production workflow automatically breaks that rule. And in their case, the module they were using was updated to deprecate a parameter on a particular function, which crashed their particular process.

Be careful not to confuse what I’m saying with automated CI/CD pipelines (dev > test > prod). This is merging external changes into a production environment; skipping dev and test entirely. In a nutshell, if you follow standard change control practices, you should rarely, if ever, encounter this situation.

Long story short (like I’m any good at short stories), they couldn’t locate a local copy of the older version and didn’t have a suitable backup to search, but the older version of the module was available in PS Gallery, so they went with scenario B.

Then the angry pack of wolves climbed in through the bedroom window in the middle of the night and ate every single one of them. Oh wait, wrong story…

And they lived happily ever after. The end.

Cloud, Scripting, Technology

Building Blocks: GitHub Issues via PowerShell

The PowerShell module “PowerShellForGitHub” contains a powerful collection of functions to let you interact with, and manage, your GitHub goodies. (Note: read the Configuration section carefully before using). I won’t repeat the installation and configuration part since they already took care of that just fine.

After playing around with it, I found one useful way to leverage this is to query the open issues for my repos, and feed selected information to other things like e-mail, Teams, and so forth. Since it’s just providing a pipeline of information, you can send it off anywhere your mind can imagine.

#requires -modules PowerShellForGitHub
function Get-GitHubRepoIssues {
  param (
    [parameter(Mandatory=$True, HelpMessage="The name of your repository")]
    [string] $RepoName,
    [parameter(Mandatory=$False, HelpMessage="GitHub site base URL")]
    [string] $BaseUrl = ""
  )
  try {
    $issues = Get-GitHubIssue -Uri "$BaseUrl/$RepoName" -NoStatus |
      Where-Object {$_.state -eq 'open'} |
        Sort-Object Id |
          Select-Object Id,Title,State,Labels,Milestone,html_url
    $issues | ForEach-Object {
      $labels = $null
      if (![string]::IsNullOrEmpty($_.labels)) {
        $labels = $_.labels.name -join ';'
      }
      [pscustomobject]@{
        ID        = $_.Id
        Title     = $_.Title
        State     = $_.state
        Labels    = $labels
        Milestone = $_.milestone.title
        URL       = $_.html_url
      }
    }
  }
  catch {
    Write-Error $Error[0].Exception.Message
  }
}

Sample output…

So, if you have a GitHub account with active repositories and issues, you might be able to glue some cool things together using PowerShell. If you have a cool example, share it in the comments below and I’ll be happy to share it on Twitter as well.


databases, Scripting, System Center, Technology

What Not to Do With ConfigMgr, 1.0.1

[note: this post has been sitting in my drafts folder for over a year, but recent events reminded me to dust it off and post it]

One of my colleagues, the infamous @chadstech, sent a link to our team, to the slide deck from the Channel9 session (MS04) “30 Things you should never do with System Center Configuration Manager” by @orinthomas and @maccaoz. If you haven’t seen (or read) it already, I strongly recommend doing so first.

It’s from 2016, so even though it’s a few years old now, it still holds up very well in mid 2019. However, everyone who’s ever worked with that product knows that the list could become a Netflix series.

This blog post is not going to repeat the above; instead, it appends that list with some things I still see in a variety of environments today. Things which really should be nipped in the bud, so to speak. Baby steps.

Using a Site Server like a Desktop

Don’t do it. Install the console on your crappy little desktop or laptop and use that. Leave your poor server alone. Avoid logging into servers (in general) unless you REALLY need to perform local tasks, and that’s it. Anything you CAN do remotely, should be done remotely.

If installing/maintaining the ConfigMgr console is your concern: forget that. The days of having to build and deploy console packages are gone. Install it once, and let it update itself when new versions are available. Kind of like Notepad++. Nice and easy.

Why? Because…

  • Using a server as a daily desktop workspace drags on resources and performance.
  • It creates a greater security and stability risk to the environment.
  • The more casual you are with your servers, the sloppier you’ll get, and eventually you’ll do something you’ll regret.

Whatever your excuse has been thus far, stop it.

Anti-Virus Over-Protection

Even in 2019, with so many tools floating about like Symantec, McAfee, Sophos, CrowdStrike, and so on, when I ask if the “exclusions” are configured to support Configuration Manager, I often get a confused look or an embarrassing chuckle. Gah!!! Chalkboard scratch!

There are several lists of things to exclude from “real-time” or “on-demand” scanning, like this one, and this one. Pick one. Failing to do this VERY often leads to breaks in processes like application deployments, software updates deployments, and policy updates.

Also important: with each new release of Configuration Manager, read the release notes and look for new folders, log files, services or processes that may be introduced. Be sure to adjust your exclusions to suit.

Ignoring Group Policy Conflicts

Whatever you’re doing with regards to GPO settings, make damned sure you’re not also doing the same things with Configuration Manager. The two “can” be combined (in rare cases) to address a configuration control requirement, and you can sew two heads on a cow, but that doesn’t mean it’s the best approach.

Pick one, or the other, only. If you have WSUS settings deployed by GPO, and are getting ready to roll out Software Updates Management via Configuration Manager, stop and carefully review what the GPO’s are doing and make adjustments to remove any possible conflicts.

And, for the sake of caffeine: DOCUMENT your settings wherever they live. GPO’s, CI’s or CB’s in ConfigMgr, scheduled tasks, whatever. DOCUMENT THEM! Use the “Comments” or “Description” fields to your advantage. They can be mined and analyzed easily (take a look at PowerShell module GPODOC for example / shameless plug).
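For the GPO side, those Description fields really are that easy to mine. A quick sketch (assumes the GroupPolicy RSAT module is installed):

```powershell
# dump every GPO name alongside whatever was typed into its Description field
Get-GPO -All | Select-Object DisplayName, Description | Sort-Object DisplayName
```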

One-Site-Fits-All Deployments

I’ve seen places that only use packages, or only use Task Sequences, or only use script wrapping, or only repackage with AdminStudio (or some alternative). That’s like doing every repair job in your house or apartment with a crowbar.

There’s nothing wrong with ANY means of deploying software as long as it’s the most efficient and reliable option for the situation. Just don’t knee-jerk into using one hammer for every nail, screw, and bolt you come across.

Pick the right tool or method for each situation/application. Doing everything “only” one way is ridiculously inefficient and time-wasting.

Sharing SQL Instances

The SQL licensing that comes with a System Center license does not permit hosting third-party products. Not even your own in-house projects, technically speaking. You “can” do it, but you’re not supposed to.

What that means is, when you run into a problem with the SQL Server side of things, and you call Microsoft, and they look at it and see you have added a bunch of unsupported things to it, you’ll likely get the polite scripted response, “Thank you for being a customer. You appear to be running in an unsupported configuration. Unfortunately, we can’t provide assistance unless you are running in a supported configuration. Please address this first and re-open your case, if needed, for us to help. Thank you. Have a nice day. Bye bye now.”

And, now, you’re facing an extended duration of what could have been a simple problem (or no problem at all, since your third-party app might be the problem).

Configuration Manager is extremely demanding of its SQL resources. Careful tuning and maintenance is VERY VERY VERY often the difference between a smooth-running site, and an absolute piece of shit site. I can’t stress that enough.

Leeching SQL Resources

Some 3rd party products, who I’m advised not to name for various legal reasons, provide “connection” services into the Configuration Manager database (or SMS provider). Attaching things to any system incurs a performance cost.

Before you consider installing a “trial” copy of one of those in your production environment, do it in a test environment first. Benchmark your environment before installing it, then again after. Pay particularly close attention to what controls that product provides over connection tuning (polling frequency, types of batch operations, etc.).

And, for God’s sake (if you’re an atheist, just replace that with whatever cheeseburger or vegan deity you prefer), if you did install some connected product, do some diagnostic checking to see what it’s really doing under the hood.

And just as important: if you let go of the trial (or didn’t renew a purchased license) – UNINSTALL that product and make sure its sticky little tentacles are also removed.

Ignoring Backups

Make sure backups are configured and working properly. If you haven’t done a site restore/recovery before, or it’s been a while, try it out in an isolated test environment. Make sure you understand how it works, and how it behaves (duration, results, options, etc.).

Ignoring the Logs

Every single time I get a question from a customer or colleague about some “problem” or “issue” with anything ConfigMgr (or Windows/Office) related, I usually ask “what do the logs show?” I’d say, on average, that around 80% of the time, I get silence or “hold on, I’ll check”.

If you ask me for help with any Microsoft product or technology, the first thing I will do is ask questions. The second thing I will do is look at the appropriate logs (or the Windows Event Logs).

So, when the log says “unable to connect to <insert URL here>” and I read that, and try to connect to same URL and can’t, I will say “Looks like the site isn’t responding. Here’s my invoice for $40,000 and an Amazon gift card”. And then you say “but I could’ve done that for free?!” I will just smile, and hold out my greedy little hand.

Keep in mind that the server and client logs may change with new releases. New features often add new log files to look at.

Check the logs first.

Ignoring AD: Cleanups

Managers: “How accurate is Configuration Manager?”

Answer: “How clean is your environment?”

Managers: (confused look)

If you don’t have a process in place to ensure your environment is maintained to remove invalid objects and data, any system that depends on that data will also be inaccurate. It’s just a basic law of nature.

Step 1 – Clean up Active Directory. Remove accounts for users and computers that no longer exist. Move unconfirmed accounts to a designated OU until verified or removed. This process is EASY to automate, by the way.
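As a sketch of that automation (the 90-day cutoff and OU path are made up, and -WhatIf keeps it harmless until you trust it; assumes the ActiveDirectory module):

```powershell
# find computer accounts idle for 90+ days and stage them in a quarantine OU
$cutoff = (Get-Date).AddDays(-90)
Get-ADComputer -Filter 'LastLogonTimeStamp -lt $cutoff' -Properties LastLogonTimeStamp |
    Move-ADObject -TargetPath 'OU=Quarantine,DC=contoso,DC=com' -WhatIf
```

Drop the -WhatIf (and swap in your own OU) once you’ve reviewed what it would move, then schedule it.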

Step 2 – Adjust ConfigMgr discovery method settings to suit your environment. Don’t poll for changes every hour if things really only change monthly. And don’t poll once a month if things really change weekly. You get the idea. Just don’t be stupid. Drink more coffee and think it through.

Step 3 – I don’t have a step 3, but the fact that you actually read to this point brings a tear to my eyes. Thank you!

Ignoring AD: Structural Changes

But wait – there’s more! Don’t forget to pay attention to these sneaky little turds:

  • Additions and changes to subnets, but forgetting to update Sites and Services
  • Changes to domain controllers, but not updating DNS, Sites and Services or DHCP
  • Changes to OUs, but forgetting to update GPO links
  • All the above + forgetting to adjust ConfigMgr discovery methods to suit.

Ignoring DNS and DHCP

“It’s never DNS!” is really not that funny, because it’s very often DNS. Or the refusal to admit there might be a problem with DNS. For whatever reason, many admins treat DNS like it’s their child. If you suggest there might be something wrong with it, it’s like a teacher suggesting their child might be a brat, or stupid, or worse: a politician. The other source of weirdness is DHCP and its interaction with DNS.

Take some time to review your environment and see if you should make adjustments to DHCP lease durations, DNS scavenging, and so on. Sometimes a little tweak here and there (with CAREFUL planning) can clean things up and remove a lot of client issues as well.

Check DHCP lease settings and DNS scavenging to make sure they are closely aligned to how often clients move around the environment (physically). This is especially relevant with multi-building campus environments with wi-fi and roaming devices.

Task Sequence Repetition

A few releases back, Microsoft added child Task Sequence features to ConfigMgr. If you’re unaware of this, read on.

Basically, you can insert steps which call other Task Sequences. In Orchestrator or Azure Automation parlance this is very much like Runbooks calling other Runbooks. Why is this important? Because it allows you to refactor your task sequences to make things simpler and easier to manage.

How so?

Let’s say you have a dozen Task Sequences, and many (or all) of them contain identical steps, like bundles of applications, configuration tasks, or driver installations. And each time something needs updating, like a new application version, or a new device driver, you have to edit each Task Sequence where you “recall” it being used. Eventually, you’ll miss one.

That’s how 737 Max planes fall out of the sky.

At the very least, it’s time wasted which could be better spent on other things, like drinking, gambling and shooting guns at things.

Create a new Task Sequence for each redundant step (or group of steps) used in other Task Sequences. Then replace those chunks of goo with a link to the new “child” Task Sequence. Now you can easily update things in one place and be done with it. Easy. Efficient.

Ignoring Staffing

Last, but certainly not least is staffing. Typically, this refers to not having enough of it. In a few cases, it’s too many. If your organization expects you to cover Configuration Manager, and its SQL Server aspects, along with clients, deployments, imaging, updates, and configuration policies, AND maintain other systems or processes, it’s time for some discussion, or a new job.

If you are an IT manager, and allow your organization to end up with one person being critical to a critical business operation, that’s foolish. You are one drunk driver away from a massive problem.

An over-burdened employee won’t have time to create or maintain accurate documentation, so forget the crazy idea of finding a quick replacement and zero downtime.

In team situations, it’s important to encourage everyone to do their own learning, rather than depend on the lead “guru” all the time. This is another single point of failure situation you can avoid.

If there’s anyone who knows every single feature, process and quirk within Configuration Manager, I haven’t met them yet. I’ve been on calls with PFE’s and senior support folks and heard them say “Oh, I didn’t know that” at times. It doesn’t make sense to expect all of your knowledge to flow out of one person. Twitter, blogs, user groups, books, video tutorials, and more can help you gain a huge amount of awareness of features and best practices.

That’s all for now. Happy configuring! 🙂

Cloud, Scripting

Microsoft Teams and PowerShell

I just started playing around with the MicrosoftTeams PowerShell module (available in the PowerShell Gallery, use Find-Module MicrosoftTeams for more information). Here’s a quick sample of how you can get started using it…

$conn = Connect-MicrosoftTeams

# list all Teams
Get-Team

# get a specific Team
$team = Get-Team -DisplayName "Benefits"

# create a new Team
$team = New-Team -DisplayName "TechSupport" -Description "Technical Support" -Owner ""

# add a few channels to the new Team
New-TeamChannel -GroupId $team.GroupId -DisplayName "Forms Library" -Description "Forms and Templates"
New-TeamChannel -GroupId $team.GroupId -DisplayName "Customers" -Description "Information for customers"
New-TeamChannel -GroupId $team.GroupId -DisplayName "Development" -Description "Applications and DevOps teams"

# dump properties for one Team channel
$channelId = Get-TeamChannel -GroupId $team.GroupId |
Where-Object {$_.DisplayName -eq 'Development'} |
Select-Object -ExpandProperty Id

# add a user to a Team
Add-TeamUser -GroupId $team.GroupId -User "" -Role Member

Here’s a splatted form of the above example, in case it renders better on some displays…

$conn = Connect-MicrosoftTeams

# list all Teams
Get-Team

# get a specific Team
$team = Get-Team -DisplayName "Benefits"

# create a new Team
$params = @{
  DisplayName = "TechSupport"
  Description = "Technical Support"
  Owner = ""
}
$team = New-Team @params

# add a few channels to the new Team
# NOTE: You could form an array to iterate more efficiently
$params = @{
  GroupId = $team.GroupId
  DisplayName = "Forms Library"
  Description = "Forms and Templates"
}
New-TeamChannel @params

$params = @{
  GroupId = $team.GroupId
  DisplayName = "Customers"
  Description = "Information for customers"
}
New-TeamChannel @params

$params = @{
  GroupId = $team.GroupId
  DisplayName = "Development"
  Description = "Applications and DevOps teams"
}
New-TeamChannel @params

# dump properties for one Team channel
$channelId = Get-TeamChannel -GroupId $team.GroupId |
Where-Object {$_.DisplayName -eq 'Development'} |
Select-Object -ExpandProperty Id

# add a user to a Team
$params = @{
  GroupId = $team.GroupId
  User = ""
  Role = 'Member'
}
Add-TeamUser @params
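And the array-driven iteration hinted at in the note above might look like this (channel names taken from the example):

```powershell
$channels = @(
    @{ DisplayName = 'Forms Library'; Description = 'Forms and Templates' },
    @{ DisplayName = 'Customers';     Description = 'Information for customers' },
    @{ DisplayName = 'Development';   Description = 'Applications and DevOps teams' }
)
foreach ($c in $channels) {
    # splat each hashtable, adding the shared GroupId as a named parameter
    New-TeamChannel -GroupId $team.GroupId @c
}
```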

Scripting, System Center, Technology

Captain’s Log: cmhealthcheck

I’ve consumed way way waaaaay too much coffee and tea today. Great for getting things done, not great for my future health.

CMHealthCheck 1.0.8 is in the midst of being waterboarded, kicked, beaten, tasered and pepper-sprayed to make it squeal. I’m close to a final release. Among the changes in testing:

  • Discovery Methods
  • Boundary Groups
  • Site Boundaries
  • Packages, Applications, Task Sequences (just summary), Boot Images (summary), etc.
  • User and Device Collections
  • SQL Memory allocation (max/pct)
  • Fixed “Local Groups” bug
  • Fixed “Local Users” bug
  • Enhanced Logical Disks report
  • Fixed “Installed Software” sorting issue
  • Fixed “Services” sorting issue
  • Fixed null-reference issues with “Installed Hotfixes”

Still in the works:

  • Sorting issue with ConfigMgr Roles installation table
  • Local Group Members listing
  • More details for Discovery Methods
  • Client Settings
  • ADR’s
  • Deployment Summary
  • Enhancements to the HTML reporting features

Stay tuned for more.

Note: The current posted version (as of 3/8/19) is 1.0.7, which is what will install if you use Install-Module.

To load the 1.0.8 test branch, go to the GitHub repo, change the branch drop-down from “master” to 1.0.8 (or whatever the other name happens to be at the time) and then use the Download option to get the .ZIP file. Then extract to a folder, and use Import-Module to import the .psd1 file and start playing.
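To see the extract-and-import mechanics without touching the real repo, here’s a sketch using a throwaway module (paths and the module name are invented; for CMHealthCheck you’d substitute the branch ZIP you downloaded):

```powershell
# build a stand-in for the downloaded branch ZIP
$work = Join-Path ([IO.Path]::GetTempPath()) 'branch-demo'
$src  = Join-Path $work 'DemoModule'
New-Item -ItemType Directory -Path $src -Force | Out-Null
New-ModuleManifest -Path (Join-Path $src 'DemoModule.psd1') -ModuleVersion '1.0.8'
Compress-Archive -Path $src -DestinationPath (Join-Path $work '') -Force

# the actual steps from the post: extract the ZIP, then import the .psd1
$dest = Join-Path $work 'extracted'
Expand-Archive -Path (Join-Path $work '') -DestinationPath $dest -Force
Import-Module (Join-Path (Join-Path $dest 'DemoModule') 'DemoModule.psd1') -Force
(Get-Module DemoModule).Version.ToString()   # 1.0.8
```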

Projects, Scripting, System Center, windows



UPDATE: 1/14/2019 – version 1901.13.2 was posted to address a problem with the previous upload.  Apparently, I posted an out-of-date build initially, so I’ll call this the “had another cup of coffee build”.

Dove-tailing from the previous idiotic blog post, I’ve taken some time off to retool, rethink, redesign and regurgitate “skattertools” as a single PowerShell module.  The new version blends PoSHServer into the module and removes the need to perform a separate install for the local web listener.  The first version of this is 1901.13.1 (as in 2019, 01=January 13th, 1st release).

How to Install and Configure sktools

  • Open a PowerShell console using Run as Administrator
  • Type: Install-Module sktools
  • Type: Import-Module sktools
  • Type: Install-SkatterTools (this creates a default “sktools.txt” configuration file in your “Documents” folder)
  • Type: Start-SkatterTools
  • Open your browser and navigate to http://localhost:8080
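For the copy/paste inclined, the steps above as one elevated console session (port per the default mentioned):

```powershell
Install-Module sktools
Import-Module sktools
Install-SkatterTools     # creates sktools.txt in your Documents folder
Start-SkatterTools
Start-Process 'http://localhost:8080'   # open the local web console
```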

This next part is only temporary, and will be improved upon soon:

  • Once the web console is open, expand “Support” and click “Settings” and modify to suit your Configuration Manager site environment.
  • Close and reopen the PowerShell console (still “Run as Administrator”)
  • Type: Start-SkatterTools
  • Refresh your web browser session

Work will continue until morale is eliminated.  Easter eggs are included, sort of.  Thoughts, feedback, bug reports, enhancement requests, angry snarky comments, are all welcome.  Enjoy!