Cloud, Scripting, Technology

Building Blocks: GitHub Issues via PowerShell

The PowerShell module “PowerShellForGitHub” contains a powerful collection of functions to let you interact with, and manage, your GitHub goodies. (Note: read the Configuration section carefully before using.) I won’t repeat the installation and configuration steps since the module’s documentation already covers them just fine.

After playing around with it, I found one useful way to leverage this is to query the open issues for my repos, and feed selected information to other things like e-mail, Teams, and so forth. Since it’s just providing a pipeline of information, you can send it off anywhere your mind can imagine.

#requires -modules PowerShellForGitHub
function Get-GitHubRepoIssues {
  [CmdletBinding()]
  param (
    [parameter(Mandatory=$True, HelpMessage="The name of your repository")]
    [ValidateNotNullOrEmpty()]
    [string] $RepoName,
    [parameter(Mandatory=$False, HelpMessage="GitHub site base URL")]
    [ValidateNotNullOrEmpty()]
    [string] $BaseUrl = "https://github.com/skatterbrainz"
  )
  try {
    # pull only the open issues, sorted by Id
    $issues = Get-GitHubIssue -Uri "$BaseUrl/$RepoName" -NoStatus |
      Where-Object {$_.state -eq 'open'} |
        Sort-Object Id |
          Select-Object Id,Title,State,Labels,Milestone,html_url
    $issues | ForEach-Object {
      # flatten the label names into a semicolon-delimited string
      $labels = $null
      if (![string]::IsNullOrEmpty($_.Labels.name)) {
        $labels = $_.Labels.name -join ';'
      }
      [pscustomobject]@{
        ID        = $_.Id
        Title     = $_.Title
        State     = $_.state
        Labels    = $labels
        Milestone = $_.milestone.title
        URL       = $_.html_url
      }
    }
  }
  catch {
    Write-Error $_.Exception.Message
  }
}
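
Since the function just emits objects, you can bolt anything onto the end of the pipeline.  Here’s a quick usage sketch that mails a summary of the open issues (the repo name, addresses, and SMTP host are placeholders, not gospel):

# placeholders: swap in your own repo, addresses, and SMTP host
$issues = Get-GitHubRepoIssues -RepoName "ChocolateyPackages"
$body = $issues | Format-Table ID,Title,Labels,URL -AutoSize | Out-String
Send-MailMessage -To "you@contoso.com" -From "github-bot@contoso.com" `
  -Subject "Open GitHub Issues" -Body $body -SmtpServer "smtp.contoso.com"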

Sample output…

So, if you have a GitHub account with active repositories and issues, you might be able to glue some cool things together using PowerShell. If you have a cool example, share it in the comments below and I’ll be happy to share it on Twitter as well.

Cheers!

Cloud, Projects, Scripting, Technology

Part 2 – Copy Azure Blob Containers between Storage Accounts using Azure Automation with Fries and a Drink

So, my previous post was about using PowerShell from a compute host (physical/virtual computer) to connect to an Azure subscription and copy containers between storage accounts.  I call that “part 1”.  This will be “part 2”, as it takes that smelly pile of compost and shovels it into Azure Automation.

In short, the basic changes from the previous example:

  • Not nearly as much fuss with credentials within the PowerShell code
  • Configuration settings are stored in Azure as Variables, rather than a .json file
  • Less code!

The previous article refers to the diagram on the left.  This one refers to the one not on the left.

(diagram: copying blob containers between storage accounts via an Azure Automation runbook)

Assumptions

  • You have access to an Azure subscription
  • In Azure, you have at least one (1) Resource Group, having two (2) Storage Accounts (one for “source”, the other for “destination”; the “backup” this performs is copying from “source” to “destination”)
  • You somehow believe I know what I’m talking about
  • You stopped laughing and thought “Shit. Maybe this idiot doesn’t know what he’s talking about?”
  • After a few more minutes you thought “Why am I reading what I’m actually thinking right now?  How does he know what I’m thinking?  It’s like he’s an idiot savant!  Maybe he counts toothpicks on the floor while brushing his teeth…”
  • You consider that this was written in November 2018, and Azure could have changed by the time you’re reading this.

Basic Outline

The basic goals of this ridiculous exercise in futility are (still):

  • Copy all (or selected) containers from Storage Account 1 to Storage Account 2 using an Azure Automation “runbook”, once per day.
  • The copy process will append “yyMMdd” datestamps to each container copied to Storage Account 2
  • The copy process will place the destination containers under a container named “backups”.  For example, “SA1/container1” will be copied to “SA2/backups/container1-181117”
  • Both storage accounts should be within the same Azure Resource Group, and in the same Region
  • New Goal: Eliminate a dedicated host machine for running the script, in favor of an Azure Automation Runbook.

Important!

This is a demo exercise only.  DO NOT perform this on a production Azure tenant without testing that absolute living shit out of it until your fingers are sore, your eyes are bloodshot and you’ve emptied every liquor bottle and tube of model glue in your house/apartment.

The author assumes NO responsibility or liability for any incidental, accidental, intentional or alleged bad shit that happens resulting from the direct or indirect use of this example.  Batteries and model glue not included.

Preparation

First, we need to set up the automation account and some associated goodies.  Some of the steps below can be performed using Azure Storage Explorer, or PowerShell, but I’m using the Azure portal (web interface) for this exercise.  Then we’ll create the Runbook and configure it, and run a test.

  1. From the Azure portal, click “All Services” and type “Automation” in the search box.
  2. Click the little star icon next to it.  This adds it to your sidebar menu (along the left)
  3. Click on “Automation Accounts”
  4. Click “Add” near the top-left, fill in the Name, select the Resource Group, Location and click Create
  5. From the Automation Accounts blade (I hate the term “blade”, in fact I hate the Azure UI in general, but that’s for another paint-fume-sniffing article), click on the new Automation Account.

Credentials and Variables

  1. Scroll down the center menu panel under “Shared Resources” and click on “Credentials”, and then click “Add a credential” at the top. Fill in the information and click Create.  This needs to be an account which has access to both of the storage accounts, so you can enter your credentials here if you like, since this is only a demo exercise.
  2. Go back to “Automation Accounts” (bread crumb menu along top is quickest)
  3. Go back to the Automation Account again, and scroll down to “Variables”
  4. Add the variables as shown in the example below.  All of the variables for this exercise are “String” type and Encrypted = “No”.  This part is a bit tedious, so you should consume all of your illicit substances before doing this step.
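
For example, a hypothetical variable set (all “String” type, Encrypted = “No”) that lines up with the runbook sketch later in this post:

ResourceGroupName     = rg-backup-demo
SourceStorageAccount  = storageaccount1
DestStorageAccount    = storageaccount2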

The Runbook

  1. Go back to the Automation Account again and click on “Runbooks”
  2. Click “Add a runbook” from the menu at top, then click “Quick Create / Create a new runbook” from the middle menu pane.
  3. Enter a name and select “PowerShell” from the Runbook type list.  Enter a Description if you like, and click Create.

When the new Runbook is created, it will (should) open the Runbook editor view.  This will have “> Edit PowerShell Runbook” in the heading, with CMDLETS, RUNBOOKS, and ASSETS along the left, and line 1 of the editor form in the top-middle.

  1. Copy/Paste the code from here into the empty space next to line 1 in the code editor.
  2. Make sure the variable names at lines 10-16 match up with those you entered in the Variables step above.  If not, then for each variable that needs to be corrected: delete the code to the right of the equals sign (“= Get-AutomationVariable -Name …”), place the cursor after the “=”, click Assets > Variables, then click the “…” next to the variable you want, and select “Add ‘Get Variable’ to canvas”. (see example below)
  3. After entering the code and confirming the variable assignments, click Save.  Don’t forget to click Save!

Testing

  1. Click “Test pane” to open the test pane (I’m shocked they didn’t call it the “test blade”) – Tip: If you don’t see “Test pane” go back to the Runbook editor, it’s at the top (select the Runbook, click Edit).
  2. Click “Start” and wait for the execution to finish.  (Note: Unlike running PowerShell on a hosted computer, Azure Automation doesn’t show the output until the entire script is finished running)

Code Note: You may notice that line 97 ($copyJob = Start-AzureStorageBlobCopy…) is commented out.  This is intentional so as to mitigate the chances of you accidentally copying an insane amount of garbage and running your Azure bill into the millions of dollars.

Testing Note: Since line 97 is commented out, the test should simply show what was found, but no copies are actually processed.  In the last image example (below) you will still see “copy completed” for each container set, but that’s just more glue-sniffing imaginary hallucination stuff for now.  Once you remove the comment, that becomes very real.  As real as Kentucky Fried Movie 3D punching scenes.

When you’ve tested this to your satisfaction, simply uncomment that line (or better yet, add $WhatIfPreference = $True at the top of the script, just below the $VerbosePreference line)
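
For reference, here’s a trimmed sketch of the runbook’s core logic, not the full script from GitHub.  The credential and variable names are assumptions, so match them to whatever you created under Shared Resources:

# sketch only: assumes the AzureRM modules in the Automation account (circa late 2018)
$cred = Get-AutomationPSCredential -Name 'AzureCredential'
Add-AzureRmAccount -Credential $cred | Out-Null

$rgName  = Get-AutomationVariable -Name 'ResourceGroupName'
$srcAcct = Get-AutomationVariable -Name 'SourceStorageAccount'
$dstAcct = Get-AutomationVariable -Name 'DestStorageAccount'

$srcKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $srcAcct)[0].Value
$dstKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $dstAcct)[0].Value
$srcCtx = New-AzureStorageContext -StorageAccountName $srcAcct -StorageAccountKey $srcKey
$dstCtx = New-AzureStorageContext -StorageAccountName $dstAcct -StorageAccountKey $dstKey

$stamp = (Get-Date).ToString('yyMMdd')
foreach ($container in (Get-AzureStorageContainer -Context $srcCtx)) {
  foreach ($blob in (Get-AzureStorageBlob -Container $container.Name -Context $srcCtx)) {
    # lands in the destination as backups/<container>-<yyMMdd>/<blobname>
    $destBlob = "$($container.Name)-$stamp/$($blob.Name)"
    # commented out on purpose, same as the real script (see the note above)
    #$copyJob = Start-AzureStorageBlobCopy -SrcContainer $container.Name -SrcBlob $blob.Name `
    #  -Context $srcCtx -DestContainer 'backups' -DestBlob $destBlob -DestContext $dstCtx
  }
}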

(screenshots: variables, runbook editor, and test output.  There was a sale on red arrows and I couldn’t say no)

Scripting, System Center, Technology, windows

A Cheap Extensible PowerShell Pipeline for ConfigMgr Queryburgers and a Side of Fries

I’ve been knocked out all day on cold medicine and just woke up.  So, to be honest, I have no idea what year it is.  In fact, Configuration Manager and SQL Server might be long gone.  Microsoft may have been acquired by Walmart, and Kanye West is POTUS.  Who knows.  Anyhow…

On many of the ConfigMgr projects I have worked on over the last few years, I find customers trying to build processes off of information pulled from Configuration Manager.  Most often it’s something like:

  • Execute X on all machines which have Y installed
  • Notify <GROUP> for all machines which have condition Z = True

…and so on.  And no, “X”, “Y” and “Z” are not real things, just variables to replace with real things.  Kind of like how politicians are variables that get replaced with money.

In many cases, this is done in a silo.  Meaning – it’s built as a standalone script.  And then another is built separately for a different purpose, and so on.  But, in many cases, there’s an overlap in the area where data is pulled from Configuration Manager on which to base the scope of the operation or process.  Rather than “hard code” this part, I’ve been using a somewhat “open” approach that returns data from a query and passes it on via the PowerShell pipeline.  This makes it fit nicely into a tool model (credit to Don Jones), and thereby: reusable.

Some common scenarios this needs to adapt to:

  • No guarantee that the ConfigMgr admin console is installed where the script is executed, and therefore, no guarantee of a local .psd1 module to load.
  • No guarantee of SCCM admin rights, via the WMI/SMS provider channel, but….
  • Having SQL database read access (as a minimum)
  • At least PowerShell 3.0 (prefer 5.x or later)
  • Doesn’t matter how it’s invoked (Task Scheduler, SQL Job, Azure Automation, Jenkins, some kid on a bicycle, a Bird scooter, etc.)

I prefer they follow Microsoft guidelines with regards to SQL using Windows authentication for two (2) reasons:  First, it’s compliant with Microsoft recommendations, and Second: It complies with Microsoft guidelines with regards to SQL using Windows authentication for Configuration Manager.

The moving parts consist of:

  • A (PowerShell) script
  • One or more SQL query files (**)
  • An AD user account with read access to the CM_XXX database
  • Coffee (Wine will do)

The general process:

  1. Something kicks it off (manual invocation, scheduled job, event trigger, etc.)
  2. Script imports the desired query from file (**)
  3. Script executes query against CM_XXX database
  4. Results (dataset) returned to script
  5. Results output to PowerShell (pipeline)

(** doesn’t matter how you prefer to store the SQL statement content. I chose files because they’re the simplest and most portable form, and they’re easy to build and export from SSMS)

For those who prefer a visual representation…

Yeah, I know, 5 and 6 could be one thing but whatever.  And coffee is applied between steps 1 and 5.  Okay, so what does this look like?

#requires -version 3.0
<#
.DESCRIPTION
This is for sample purposes only.  Actual horrific mess is posted on GitHub here.
Name: Get-CMSqlQueryData.ps1
Real, 100% gluten-free documentation headings are provided in the actual script on GitHub
#>

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="SQL Server ADO Connection Object")]
    $AdoConnection,
  [parameter(Mandatory=$True, HelpMessage="SQL Query Statement")]
    [ValidateNotNullOrEmpty()]
    [string] $Query,
  [parameter(Mandatory=$False, HelpMessage="ConfigMgr SQL Server Host Name")]
    [string] $SQLServerName,
  [parameter(Mandatory=$True, HelpMessage="ConfigMgr Site Code")]
    [ValidateNotNullOrEmpty()]
    [string] $SiteCode,
  [parameter(Mandatory=$False, HelpMessage="Query Timeout in Seconds")]
    [int] $QueryTimeout = 120
)
$DatabaseName = "CM_$SiteCode"
# track whether this script opened the connection, so it only closes what it opened
$IsOpen = $False
if (!$AdoConnection) {
  Write-Verbose "opening new connection"
  $AdoConnection = .\Get-CMAdoConnection.ps1 -SQLServerName $SQLServerName -DatabaseName $DatabaseName
  if (!$AdoConnection) {
    Write-Warning "failed to open SQL connection!"
    break
  }
  $IsOpen = $True
}
$cmd = New-Object System.Data.SqlClient.SqlCommand($Query,$AdoConnection)
$cmd.CommandTimeout = $QueryTimeout
$ds = New-Object System.Data.DataSet
$da = New-Object System.Data.SqlClient.SqlDataAdapter($cmd)
[void]$da.Fill($ds)
if ($IsOpen) {
  Write-Verbose "closing connection"
  $AdoConnection.Close()
}
$rows = $($ds.Tables).Rows.Count
Write-Verbose "$rows rows returned"
Write-Output $($ds.Tables).Rows

The trainwreck above is available on my GitHub trainwreck site here.  The Get-CMAdoConnection.ps1 script referenced above, is also available on my tragic GitHub site here.

A sample query (cm-all-systems.sql):

SELECT DISTINCT 
  v_R_System.Name0 AS ComputerName, 
  v_R_System.ResourceID, 
  v_R_System.AD_Site_Name0 AS ADSite, 
  vWorkstationStatus.ClientVersion, 
  vWorkstationStatus.UserName, 
  vWorkstationStatus.LastHardwareScan, 
  v_GS_COMPUTER_SYSTEM.Model0 AS Model, 
  v_GS_OPERATING_SYSTEM.Caption0 AS OSName, 
  v_GS_OPERATING_SYSTEM.BuildNumber0 AS OSBuild, 
  v_GS_OPERATING_SYSTEM.OSArchitecture0 AS OSArch
FROM 
  v_R_System 
  LEFT OUTER JOIN
    v_GS_OPERATING_SYSTEM ON 
    v_R_System.ResourceID = v_GS_OPERATING_SYSTEM.ResourceID 
  LEFT OUTER JOIN
    v_GS_COMPUTER_SYSTEM ON 
    v_R_System.ResourceID = v_GS_COMPUTER_SYSTEM.ResourceID 
  RIGHT OUTER JOIN
    vWorkstationStatus ON 
    v_R_System.ResourceID = vWorkstationStatus.ResourceID
ORDER BY 
  ComputerName

The reason for the optional -AdoConnection parameter is that it allows some control and flexibility around how/when connections are opened against the SQL database.  When running a batch of queries, it’s typically best to open one connection, execute the (multiple) queries, and close the connection at the end, rather than opening an explicit connection for each query.  However, if you only need to run a single query, I didn’t want the user (you) to have to think about an explicit connection (and subsequent connection-close) around the process, so it’s implicit.  See how considerate I can be? Like omg.

That said, let’s see how this looks in action.

For this example, I will assume there’s another script which will be invoked with the results of a query against ConfigMgr (e.g. “Do-Something.ps1”).  In this case, I want to isolate all ConfigMgr devices which are found to be in the Active Directory site named “Seattle”, and send those to a script to do something with their names, hence the genius name: Do-Something.ps1.

Example 1 – Single query

$query = Get-Content -Path "x:\stuff\queries\cm-all-systems.sql"
$result = .\Get-CMSqlQueryData.ps1 -SQLServerName "cm01.contoso.com" -SiteCode "P01" -Query $query |
  Where-Object {$_.ADSite -eq 'Seattle'} | Sort-Object ComputerName | 
    Select-Object -ExpandProperty ComputerName
if ($result.Count -gt 0) { .\Do-Something.ps1 -ComputerName $result }

In this example, I don’t use the -AdoConnection parameter, so the Get-CMSqlQueryData.ps1 script explicitly opens a new connection by calling out to Get-CMAdoConnection.ps1, and then closes the connection at the end.  The last line simply checks if any rows were returned and then passes them to the Do-Something.ps1 script.

Example 2 – Batch queries

$SqlHost = "cm01.contoso.com"
$SiteCode = "P01"
$DBname = "CM_$SiteCode"
$ReportPath = "y:\reports"
$queryFiles = Get-ChildItem -path "x:\stuff\queries" -Filter "*.sql"

if ($queryFiles.Count -gt 0) {
  # open a database connection
  $conn = .\Get-CMAdoConnection.ps1 -SQLServerName $SqlHost -DatabaseName $DBname
  # iterate the query files and run each query in a loop
  foreach ($qfile in $queryFiles) {
    # import the query statement and define the output .CSV file name
    $query = Get-Content -Path $($qfile.FullName)
    $csvFile = Join-Path -Path $ReportPath -ChildPath "$($qfile.BaseName).csv"
    # run the query and dump it into the .CSV file
    .\Get-CMSqlQueryData.ps1 -AdoConnection $conn -SQLServerName $SqlHost -SiteCode $SiteCode -Query $query | 
      Export-Csv -Path $csvFile -NoTypeInformation
  }
  # close the database connection
  $conn.Close()
}
Write-Host "like, omg! I can't believe I just did all that amazing stuff.  And it must have been amazing because YOU did it!" -ForegroundColor Green

As you can see, the second example gets all of the query files in a given folder path, then opens a SQL connection and iterates the queries and outputs each to its own .CSV file, and then closes the connection.  You can also pass in an explicit list of query filenames, rather than churning through an entire folder.

You could (and probably should) wrap the internals of the foreach() block inside of a try/catch/finally envelope (see the sketch below), to ensure $conn.Close() gets called if one of the iterations chokes to death on an egg roll or something.  But hopefully this is easy to understand.
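
Something like this (a sketch only, reusing the variables from Example 2):

$conn = .\Get-CMAdoConnection.ps1 -SQLServerName $SqlHost -DatabaseName $DBname
try {
  foreach ($qfile in $queryFiles) {
    $query = Get-Content -Path $qfile.FullName
    $csvFile = Join-Path -Path $ReportPath -ChildPath "$($qfile.BaseName).csv"
    .\Get-CMSqlQueryData.ps1 -AdoConnection $conn -SQLServerName $SqlHost -SiteCode $SiteCode -Query $query |
      Export-Csv -Path $csvFile -NoTypeInformation
  }
}
finally {
  # runs whether the loop finished cleanly or choked on that egg roll
  $conn.Close()
}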

Summary

So, this lets me get data from Configuration Manager, from any computer on the network which has PowerShell 3.0 or later, whether or not it has the ConfigMgr admin console installed, and I can post-process the results however I want.  In addition, I don’t have to make any PowerShell code changes in order to add new queries to the library.  I also do not use an explicit username and password, since my SQL Server instance is configured for Windows authentication only.

Thank you for reading!  Please post comments or questions!  Let me know someone is still reading this stuff.  If you read to this point and you’re the first to tweet me the phrase “a correction to a bug in my code example”, you MIGHT win an Amazon gift card.  Just sayin. 🙂

business, databases, Devices, Scripting, System Center, Technology

Asset Inventory, It’s not just for breakfast anymore

For those of you that read my blog this will probably sound familiar.  But for those not yet dunked in the stupid tank of my pontification gyrations, I hope you find this post useful in some way.  Maybe print it out and use it for a toilet bombing target.

Last night I was grilling some sort of roadkill and having a beer and my thumbs went out of control on Twitter.  It was a spur of the moment reflection on how this topic seems to repeat over and over.  For some reason, I assume “asset inventory” is important enough for most organizations to make it a priority.  But, more often than not, it seems not to be the case.

Why Asset Inventory Sucks

I explained why it sucks back in 2014 on my old blog, here.  It still sucks, because humans suck at keeping track of things.  However, recently I received a few requests to digress into this a bit more on the recommendation side, which is what this post is aimed at doing.  The biggest and most important piece is process. In fact, more than one process, but at least start there.

I must emphasize here the following:

There is no such thing as “perfect asset inventory”.  Whether you’re Wal-Mart or the US Department of Defense, shit gets lost.  And somewhere, somehow, that piece of shit has a record sitting in some shitty place that still says that shit is real shit and it exists somewhere.  But, if you try to put your hands on that shit, you find you’re shit out of luck.  But the goal should always be to get as close to “perfect” as you can, without inflicting harm on your business, your employees, or your customers.

Side Note: If you get bored, Google “US military missing inventory” and pull up a chair.  You’ll be reading for a while.

Nuts and Bolts

When you look at how a device can be tracked throughout its lifetime, it’s actually not that different from how humans are tracked.

Whether it’s humans or devices, each stage of that lifetime is tracked by a distinct system which maintains relevant information for that category.  And for either humans or devices, it’s not uncommon that each of those systems belongs to a different department, and they end up building silos of information.  It’s also not uncommon that each silo maintains redundant, and often inconsistent, information about the same asset/person.  Many of these systems have been developed independently for years before anyone thought to link them for various business needs.

For humans, there’s the hospital, the IRS, SSA, DMV, DHS, DOD, state and municipal government, as well as insurance companies, banks, web sites, schools, clubs, retailers, and so on.  Few of these entities routinely share the same information about the same people, and even then, still maintain their own data.  In this respect, devices aren’t that different from humans.

For devices, there’s a Purchase Order, Active Directory and Azure AD, EMS, Configuration Manager, SQL Server (behind multiple systems), HelpDesk systems, Logging systems, and disposal records.  In between, there are tons of home-grown apps/systems as well.

Finding the Wounds

The first thing to do is identify each tool (system, service, etc.) you already have, and identify what it tracks.  Document or diagram what each system tracks (types of information, attributes, etc.) and what pieces of information they have in common.  Common examples include Asset Tags, BIOS serial numbers, as well as manufacturer, model, etc.  For software-based systems, it may also be a GUID, SID or an LDAP cn, etc.

If you’re not primarily a DBA, kidnap one (they can be bribed with food, caffeine and Amazon gift cards).  Design a solution to extract ONLY the information you need to confirm the existence of an asset in each system.  In this design, determine what you need to compare across each system to ensure consistency and find missing pieces (gaps).

Note: Be careful with data extraction (or queries) that you don’t over-burden the systems themselves.  This is particularly true for things like Configuration Manager, which are sensitive to SQL performance.

Get some reports to show assets which are not found in all systems, then use that to determine why the information is missing.  This often points to a process that needs to be updated.
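
As a sketch of the comparison step, assume you’ve exported an asset list from two of those systems to CSV, each with a SerialNumber column (the file names here are made up):

$cm = Import-Csv -Path .\configmgr-assets.csv
$am = Import-Csv -Path .\assetdb-assets.csv
# surface serial numbers that exist in one extract but not the other
Compare-Object -ReferenceObject $cm -DifferenceObject $am -Property SerialNumber |
  ForEach-Object {
    $where = if ($_.SideIndicator -eq '<=') { 'ConfigMgr only' } else { 'Asset DB only' }
    [pscustomobject]@{ SerialNumber = $_.SerialNumber; FoundIn = $where }
  }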

For example, you determine that Jimmy, in the Purchasing Department, doesn’t capture some key pieces of information when a shipment arrives.  So you decorate Jimmy’s car with shaving cream and cat litter during lunch time, with a note warning him to pick up the slack.  And Debbie, ignores the weekly email report of machines which haven’t logged into AD in more than 180 days.  So, you sign Debbie up for every porn site mailing list using her personal email address, and cover her desk with cat litter, and Post-It notes with reminders to fill that information in soon.

WARNING: These are simply ridiculous suggestions made by random imaginary homeless people.  The author of this blog does not condone shaving cream, porn sites or Post-It notes.  In fact, the author doesn’t condone this blog.  Any similarity to real persons is unintentional. Batteries not included.  Void where prohibited.

Examples

Some of these may look familiar, as they are EXTREMELY common in most organizations.

  • Computer accounts left in Active Directory, long after a device has been disposed
  • Computer objects missing in ConfigMgr due to restrictive Discovery settings, limited user account, etc.
  • Asset management systems that rely on human data entry to identify assets
  • Lack of documented procedures for new hires to follow, especially in IT
  • Allowing people to “borrow” devices back from the disposal pile after they’ve been retired
  • Failing to update records when assigning an existing device to a different user
  • Relying on device names or descriptions in AD to identify user assignments

Control the Bleeding

  • Use scripting to manage orphaned AD computer accounts.
    • Search by LDAP attributes like PwdLastSet, Last-Logon, etc. (read the “remarks” section of Last-Logon for a general heads-up on using this)  You can modify the GC replication flag for these attributes (be very careful) or make your script query all domain controllers and compare results.
    • Machines which haven’t touched the network in a long time (usually more than 30 days, but it depends on the nature of your business) can be disabled and moved to a special OU using PowerShell (or whatever; see the sketch after this list)
    • If nobody whines after X days, delete the accounts.  If they show-up the next day angry, just rejoin them to the domain and apply liberal amounts of pepper spray to the user. (just kidding, don’t do that)
    • For any automation you concoct, be sure it includes logging and reporting/notification throughout.  And be sure to include some “what-if” support to test without accidentally deleting the CEO’s laptop.  Think PowerShell [CmdletBinding(SupportsShouldProcess=$True)] , and $WhatIfPreference for things that don’t natively support -WhatIf, etc.
  • If you find inconsistencies between your inventory-related systems, determine why.  Then look for ways to replace human input with some sort of automation (PowerShell, PowerShell, PowerShell, a few table spoons of SQL and more PowerShell)
  • Establish (or update) your policies and procedures.  Seek advice from other organizations, books, and blogs.  Ask questions on forums like Slack, Reddit, StackOverflow, etc. as well.  Take your time, but get it right.
  • Be careful to not reinvent any wheels.  Don’t replicate more information than you really need, as it adds risk of creating yet another pool of information that could become isolated later on.
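
Here’s a minimal sketch of that stale-account routine, with -WhatIf support baked in.  The cutoff and target OU are assumptions to adjust for your environment:

# sketch: stale-computer cleanup with -WhatIf support
[CmdletBinding(SupportsShouldProcess=$True)]
param (
  [parameter(Mandatory=$False)]
  [int] $DaysInactive = 30,
  [parameter(Mandatory=$False)]
  [string] $StaleOU = "OU=StaleComputers,DC=contoso,DC=local"
)
Import-Module ActiveDirectory
$cutoff = (Get-Date).AddDays(-$DaysInactive)
# lastLogonTimestamp replicates lazily (up to ~14 days), so treat this as approximate
$stale = Get-ADComputer -Filter {LastLogonTimeStamp -lt $cutoff -and Enabled -eq $True} -Properties LastLogonTimeStamp
foreach ($computer in $stale) {
  if ($PSCmdlet.ShouldProcess($computer.Name, "Disable and move to $StaleOU")) {
    Disable-ADAccount -Identity $computer
    Move-ADObject -Identity $computer.DistinguishedName -TargetPath $StaleOU
  }
}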

Notice that I lean towards PowerShell and building things.  You may prefer to use a third-party (free or retail) product or service, which is fine.  I come from the era before vendors bought up all the land for corporate software farming.  We had to grow our own goodies from scratch.  That’s not a binary choice however.  You can mix the two, such as using things like Sysinternals, SQL Express, and so on, along with scripting.  You have options.  Options are good.

Connecting the Dots

One final thought, and this crosses a lot of different aspects of IT operations.  This has to do with management support.  So often, the IT folks bemoan not having enough resources, training, or budgeted time, to get out in front of the problems and fix them before they continue to grow out of control.  The biggest challenge in this is communication.

Management reads, writes and speaks in terms of money.  Saved or spent, it’s all about money.  A business exists to make money, after all.  IT folks read, write and speak operational efficiency.  It often ends up being like a singles bar, and Stevie Wonder is trying to hit on Helen Keller, but the bartender is just watching the train-wreck while drying glasses with a towel.  Consultants are often the bartender in this scene.

If you want to sell your idea to get support, you need to translate what you want into dollars.  Your idea HAS to either save or earn more money than any other option available to them.  This commercial was cute in its day, but it’s actually more true than anyone expected.

  • For every procedural change you want to make, be sure to identify how much money it will save (or new revenue it earns)
  • Talk to your vendors/suppliers about cost implications (licensing, terms, etc.)
  • Double-check your numbers and have someone in Finance review as well
  • Try to avoid solutions that increase costs to acquire or operate, IF you can find or build an equally capable solution for free.  Remember, you want to save your company money (or find new revenue streams).  If it comes down to one retail solution vs. another, so be it
  • Make your proposal clear enough for your grandfather to understand, even if he’s been dead for years
  • Don’t get too immersed in your solution.  There may be a better one, and ego is the devil

Good luck!

 

Scripting, Technology, windows

Install 17 Apps in 16 minutes without Local Files

So I Tweeted this a few times, but some people DM’d me with questions about how, what, why Chocolatey, rather than Ninite, or some other bundling solution, and so on. Well, Chocolatey is essentially PowerShell. And since it can be installed from a remote URI, I can add layers on top of that to do my own thing.

I don’t need to download any installers, or prepare anything ahead of time (external storage, thumb drives, etc.).  Please read in entirety before forming any plans, judgments, opinions or tasteless jokes.

Ingredients

My setup is as follows:

  • A 5 year old HP Elitebook 9470m with 16 GB memory, and a Samsung EVO 850 SSD
  • A wired ethernet connection (wireless and LTE are fine if you aren’t in a hurry)
  • Windows 10 x64 1803 Enterprise or Professional (fresh/new install)
  • Renamed the device and rebooted

Process

  1. Open PowerShell using Run as Administrator
  2. Enter Set-ExecutionPolicy Bypass -Force
  3. Enter Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('<URL>')) and go

Keep in mind the “<URL>” is a placeholder for YOUR script location.  I have mine in GitHub, but any URL which publishes the raw file is fine.  By “raw” I mean no formatting garbage, ads, banners, etc. just the raw file contents.

You could also post the script content as a GIST, but for me, the GIST GUID string is too hard to remember unless I happen to be a savant.  So I used the dollar store discounted cheap-o version approach of the GitHub repo file “raw” link: https://raw.githubusercontent.com/Skatterbrainz/ChocolateyPackages/master/Install-Capps.ps1

That’s it.

Press Enter and grab a coffee.  When it’s done, which in my 18th test is now (on average) 16 minutes and 30 seconds, I’m ready to get busy.

The Code

The actual script is under one of my GitHub repos, so it may be modified after this blog post.  The following is for example purposes only.

#Requires -RunAsAdministrator
#Requires -Version 5
[CmdletBinding(SupportsShouldProcess)]
param ()
$time1 = Get-Date
Write-Host "setting up chocolatey" -ForegroundColor Green
if (!(Test-Path "$env:PROGRAMDATA\chocolatey\choco.exe")) {
    Write-Verbose "installing chocolatey"
    try {
        Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
        Write-Verbose "chocolatey has been installed. yay!"
    }
    catch {
        Write-Warning "failed to install chocolatey..."
        Write-Warning $_.Exception.Message
        break
    }
}
else {
    Write-Verbose "chocolatey is already installed. yay!"
}
Write-Verbose "installing packages from internal list"
$pkgs = "googlechrome,7zip,notepadplusplus,vlc,slack,sysinternals,azurepowershell,git,visualstudiocode,azurestorageexplorer,keepass,jing,office365proplus,paint.net,putty,wmicc,teamviewer"
$count = 0
foreach ($pkg in $pkgs -split ',') {
    if ($WhatIfPreference) {
        # dry run: choco reports what it would install
        choco install $pkg -whatif
    }
    else {
        choco install $pkg -y
    }
    $count++
}
Write-Host "finished!" -ForegroundColor Green
$time2 = Get-Date
$ts = $time2 - $time1
Write-Host $("$count packages installed. Elapsed time: {0:g}" -f $ts) -ForegroundColor Green

I’m sure you could modify this cheesy example to work better/faster, and jump through more hoops, which is fine.  Please do so!  If you do make a better version (or already have one), please let me know so I can share a link to it with others.

Caveats and Warnings

Nothing comes without possible downsides, even coffee and beer (hard to imagine).

  • Chocolatey, basic/free version, uses a public CDN / repository of packages.
  • If you’re not comfortable relying on source packages on the Internet, you can host your own internal repository and modify chocolatey to point to your controlled location (see the sketch after this list).
  • You can buy business licensing for Chocolatey, which gives you additional tools and support, which I would recommend for business and education type environments.
  • There’s nothing wrong with other methods obviously.  This article is not intended to pitch this as a “better way” by any means.  Just an example of “another way”.  You must choose which cup to drink from.
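
If you do go the internal-repository route mentioned above, pointing Chocolatey at it is a sketch like this (the feed URL is a placeholder for your own package server):

# register an internal feed and drop the public default
choco source add -n="internal" -s="https://nuget.contoso.local/chocolatey"
choco source remove -n="chocolatey"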

The script shown and linked above is provided for example purposes only.  There is no warranty or guarantee of any kind, explicit or implied, for any purpose or use, as-is or in derivative works.  The author assumes no liability for alleged damages or loss of data arising from any use.  Users are advised to test in an isolated, non-production environment, to ensure fitness and reliability prior to considering in other environments.  Use at your own risk.

Scripting, System Center, Technology

A Windows 10 Imaging 2-Step Boogaloo

It’s a new dance, and it goes like this…

Step 1 – Left foot forward: Image the device with a generic name, unplug, place on a shelf

Step 2 – Right foot to the side: Fetch from shelf, run script to assign to a user, hand device to user, go back to surfing Twitter

What could possibly go wrong?

Caveat Stuff

This “procedure”, if you will, is predicated on a scenario where the devices are NOT going to retain the auto-generated name when going into production.  They will instead use a unique name based on whomever they are assigned to (e.g. SAMaccountName, etc.).  If you can, I strongly recommend NOT doing this, which would seem strange that I’m essentially negating all of the remainder of this stupid blog post and telling you to just follow step 1, sort of.  However, if you insist on using “JSMITH”, or some other ad hoc data entry value, for the device name, then by all means, drink up, snort up, shoot up, and continue reading.  Thank you!

Errata / Disclaimer / Legal Stuff

At no point in any time in inter-galactic history, for any purpose or interstellar war or planetary conflict, shall anything mentioned herein be provided with any semblance of a warranty, guarantee, or promise that it will be error-free or suitable for your needs.  Nor shall this brainless author assume any liability, or responsibility for any direct, indirect, or alleged damages or loss of productivity, possibly attributed to the direct or indirect use of any information provided herein, for any purpose, explicit or implied, notwithstanding hereinafter for any jurisdiction of human societal or governmental law, or any group of suits on a golf course, related therein.  Golf carts and Martinis are not included.

…and One More Thing

Many blog posts / articles tend to portray a tone of “this is how it’s done”.  This blog post is different for two reasons: (a) It’s just ONE example of dealing with ONE common scenario, out of quadrillions of bazillions and kadrillions of possible scenarios, and (b) it’s likely to be the dumbest article you’ve read today.

Step 1 – Image and Stage Device

This step is all about imaging a new device (or wipe/reload an existing device) whereby it isn’t immediately assigned to some whiney complainer, oops, I mean user.  It goes on a shelf, gathering dust, while it awaits being assigned to someone.

  1. Create / Copy / Hallucinate a PowerShell script:
    > It derives a name using available data (ex. Serial number, MAC, etc.).
    > Save the script in a shared location to allow for making a Configuration Manager Package.
    > Refer to horrifically inept script example further below.
  2. Create a new Package in Configuration Manager
    > Note: if you already have a OSD-related package for bundling your script goodies, just toss it in with the rest and they’ll play like over-caffeinated kids in one of those gooey McDonald’s Playland ball pits.
    > Distribute or Update Distribution on the Package
  3. Add a step to your OSD Task Sequence
    > Insert just before “Apply Operating System”
    > Run PowerShell Script –> Choose the Package, and enter the script name and parameters/arguments, select “ByPass”
    > Note: If you want to assign a common OU just assign it in the Task Sequence “Apply Network Settings” step, or add your own “Join Domain or Workgroup” step.
  4. Deploy the Task Sequence
    > If you target “All Unknown Computers”, make sure the collection does not have the “OSDComputerName” Collection Variable attached

Step 2 – Provision and Assign to Hapless User

This step is all about getting up from your desk, grunting and complaining the entire way, maybe knocking over your cup of cold coffee, to shuffle slowly over to the dust-covered shelf, fetching a pre-imaged device, and doing some doodling on it so it can be handed to a bitchy customer, oops, again, I mean user.  Okay, in all seriousness, you may be lucky today, and the user is actually a cool person.  But you’re reading my blog, which means you’re probably not that lucky.

  1. Plug device into your network
  2. Find something to talk about while you wait for it to boot up
  3. Log in using your magical omniscient IT wizard power account
  4. Run a crappy half-baked PowerShell script which renames the device and moves it to a special AD Organizational Unit (OU) to suit the user’s department, etc.
  5. Wait for the reboot
  6. Unplug the device
  7. Throw at the user as hard as you can
  8. Go back to reading Facebook and Twitter
  9. Wait for Security to arrive and escort you out of the building

Horrifically Inept Script Examples

I told you they were going to be horrific and inept, but you didn’t think I was serious.

Script 1 – Assign a “Temporary” Device Name during OSD Task Sequence

Save this mess to a file named “Set-DeviceName.ps1”

[CmdletBinding()]
param (
  [parameter(Mandatory=$False)]
  [ValidateNotNullOrEmpty()]
  [string] $Prefix = "TMP"
)
$SerialNum = Get-WmiObject -Class Win32_SystemEnclosure | Select-Object -ExpandProperty SerialNumber
$NewName = "$Prefix-$SerialNum"
# in case you're imaging a VM with a stupid-long serial number...
if ($NewName.Length -gt 15) {
  $SerialNum = $SerialNum.Substring(0,15-($Prefix.Length+1))
}
$NewName = "$Prefix-$SerialNum"
try {
  Write-Verbose "new device name = $NewName"
  $tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment
  $tsenv.Value("OSDComputerName") = $NewName
  Write-Verbose "OSDComputerName = $NewName"
}
catch {
  Write-Verbose "not running in a task sequence environment"
  Write-Host "new device name = $NewName"
}
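
Outside of a task sequence there’s no Microsoft.SMS.TSEnvironment COM object, so the script falls into the catch block and just prints the name it would have used.  Handy for a quick sanity check:

.\Set-DeviceName.ps1 -Prefix "LAB" -Verbose
# example output (your serial number will vary): new device name = LAB-5CG1234XYZ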

Script 2 – Provision Device for Assigned User

Note: The following chunk of PowerShell code might look impressive, but that’s because I didn’t create all of it.  I just modified original examples shared by John Warnken and Stephen Owen.  Save this mess to a file named “Assign-UserDevice.ps1”.  This script relies on the “Locations.csv” file to provide the list of locations and department codes for the popup form.

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="CSV input file path")]
    [string] $CsvFile = "",
  [parameter(Mandatory=$False, HelpMessage="Form Title")]
    [ValidateNotNullOrEmpty()]
    [string] $FormTitle = "Contoso - Provision Device",
  [parameter(Mandatory=$False, HelpMessage="Maximum UserName character length")]
    [ValidateRange(1,15)]
    [int] $MaxUserNameLength = 9,  # must pass the 9-character check below (15-char name limit minus the 6-char prefix)
  [parameter(Mandatory=$False, HelpMessage="Force Upper Case username")]
    [switch] $IgnoreCase,
  [parameter(Mandatory=$False, HelpMessage="Keep existing OU location")]
    [switch] $KeepOuLocation,
  [parameter(Mandatory=$False, HelpMessage="Apply Changes")]
    [switch] $Apply,
  [parameter(Mandatory=$False, HelpMessage="Do not force a restart")]
    [switch] $NoRestart
)

$ScriptPath = Split-Path -Parent $PSCommandPath
if ($CsvFile -eq "") {
  $CsvFile = Join-Path -Path $ScriptPath -ChildPath "Locations.csv"
}

function Move-ComputerOU {
  param (
    [parameter(Mandatory=$True)]
    [ValidateNotNullOrEmpty()]
    [string] $TargetOU
  )
  $ComputerName = $env:COMPUTERNAME
  $ads=[adsi]''
  $adssearch = New-Object DirectoryServices.DirectorySearcher
  $adssearch.searchroot = $ads
  $adssearch.filter="(objectclass=computer)"
  $adc1 = $adssearch.findall() | Where-Object {$_.properties.item("cn") -like $ComputerName}
  $ComputerDN = $adc1.properties.item("distinguishedname")
  Write-Verbose "distinguishedName = $ComputerDN"
  $adc = [adsi]"LDAP://$ComputerDN"
  $targetOU="LDAP://$targetOU"
  Write-Verbose "target path = $targetOU"
  $adc.psbase.MoveTo($targetOU)
}

if ($MaxUserNameLength -gt 9) {
  Write-Warning "UserName portion cannot be longer than 9 characters when the prefix is 6 characters long"
  break
}

if (!(Test-Path $CsvFile)) {
  Write-Warning "CSV Input file not found: $CsvFile"
  break
}
$LocData = Import-Csv -Path $CsvFile

[xml]$XAML = @'
<Window 
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
  Title="" 
  Height="200" Width="320" Topmost="True" WindowStyle="ToolWindow" 
  WindowStartupLocation="Manual" Top="200" Left="200" 
  FocusManager.FocusedElement="{Binding ElementName=Text_User}"> 
  <Grid> 
    <Label Name="Label_Warn" Content="" HorizontalAlignment="Left" Foreground="#ff0000" Height="27" Margin="15,0,0,0" VerticalAlignment="Top" Width="300" />
    <Label Name="Label_Loc" Content="Loc+Dept" Foreground="#000000" HorizontalAlignment="Left" Height="27" Margin="15,20,0,0" VerticalAlignment="Top" /> 
    <Label Name="Label_Dlm" Content="-" Foreground="#000000" HorizontalAlignment="Left" Height="27" Margin="125,50,0,0" VerticalAlignment="Top" />
    <Label Name="Label_Num" Content="UserName" Foreground="#000000" HorizontalAlignment="Left" Height="27" Margin="150,20,0,0" VerticalAlignment="Top" />
    <ComboBox Name="Combo_Loc" Margin="20,50,0,0" Height="27" Width="90" HorizontalAlignment="Left" VerticalAlignment="Top" VerticalContentAlignment="Center">
    </ComboBox>
    <TextBox Name="Text_User" Margin="150,50,0,0" Height="27" Width="90" HorizontalAlignment="Left" VerticalAlignment="Top" VerticalContentAlignment="Center" Text="" MaxLength="20" CharacterCasing="Lower" />
    <Button Name="Button_Continue" Content="Continue" Margin="90,100,0,0" HorizontalAlignment="Left" VerticalAlignment="Top" Height="27" Width="100"/> 
  </Grid>
</Window> 
'@
[void][System.Reflection.Assembly]::LoadWithPartialName('presentationframework') 

# Read XAML string and convert into a form object
$reader = (New-Object System.Xml.XmlNodeReader $xaml) 
$Form = [Windows.Markup.XamlReader]::Load( $reader ) 

# Add Form objects as script variables 
$xaml.SelectNodes("//*[@Name]") | ForEach-Object {Set-Variable -Name ($_.Name) -Value $Form.FindName($_.Name)} 

foreach ($loc in $LocData) {
  $LocDept = "$($loc.Loc)$($loc.Dept)"
  $Combo_Loc.AddChild($LocDept)
}

$Form.Title = $FormTitle
$Text_User.Maxlength = $MaxUserNameLength
if (!($IgnoreCase)) {
  $Text_User.CharacterCasing = "Upper"
}
# add form handler for pressing Enter on UserName text box
$Text_User.add_KeyDown({
  if ($args[1].key -eq 'Return') {
    Write-Verbose "action -> user pressed Enter on username textbox"
    $Location = $Combo_Loc.SelectedValue
    $UserName = $Text_User.Text.ToString()
    Write-Verbose "selection -> $Location"
    Write-Verbose "username -> $UserName"
    if (!([string]::IsNullOrEmpty($Location))) {
      $Script:LocIndex = $Combo_Loc.SelectedIndex
      $Script:NewName = $Location+'-'+$UserName
      $Script:Ready = $True
    }
    $Form.Close() 
  }
})
# add form handler for clicking Continue button on exit
$Button_Continue.add_Click({
  Write-Verbose "action -> pressed Continue button"
  $Location = $Combo_Loc.SelectedValue
  $UserName = $Text_User.Text.ToString()
  Write-Verbose "selection -> $Location"
  Write-Verbose "username -> $UserName"
  if (!([string]::IsNullOrEmpty($Location))) {
    $Script:LocIndex = $Combo_Loc.SelectedIndex
    $Script:NewName = $Location+'-'+$UserName
    $Script:Ready = $True
  }
  $Form.Close() 
})
# display the form for the user to interact with

$Form.ShowDialog() | Out-Null

if (!($Script:Ready)) {
  Write-Warning "No selection or entry. Nothing to do."
  break
}

$RowSet = $LocData[$Script:LocIndex]
$OuPath = $RowSet.DeviceOU

if ($Apply) {
  Write-Host "New Name...: $NewName" -ForegroundColor Green
  if (-not ($KeepOuLocation)) {
    Write-Host "OU Path....: $OuPath" -ForegroundColor Green
    Move-ComputerOU -TargetOU $OuPath
  }
  Write-Verbose "renaming computer to $NewName"
  Rename-Computer -NewName $NewName -Force
  if (!($NoRestart)) {
    Restart-Computer -Force
  }
}
else {
  Write-Host "Test Mode (No changes were applied)" -ForegroundColor Cyan
  Write-Host "New Name...: $NewName" -ForegroundColor Cyan
  if (-not ($KeepOuLocation)) {
    Write-Host "OU Path....: $OuPath" -ForegroundColor Cyan
  }
}

Locations.csv File for Assign-UserDevice.ps1

Note: “Loc” can be a building, campus, city, or whatever.  The ADGroup column is for future/optional/possible/potential use for adding the computer to an AD security group as well (see the sketch below the CSV).

Loc,Dept,DeviceOU,ADGroup
BOS,HR,"OU=Workstations,OU=HR,OU=Boston,DC=Contoso,DC=local",
BOS,RD,"OU=Workstations,OU=Research,OU=Boston,DC=Contoso,DC=local",
MIA,HR,"OU=Workstations,OU=HR,OU=Miami,DC=Contoso,DC=local",
MIA,MK,"OU=Workstations,OU=Marketing,OU=Miami,DC=Contoso,DC=local",
SFO,FN,"OU=Workstations,OU=Finance,OU=SanFrancisco,DC=Contoso,DC=local",
SFO,HR,"OU=Workstations,OU=HR,OU=SanFrancisco,DC=Contoso,DC=local",
SFO,RD,"OU=Workstations,OU=Research,OU=SanFrancisco,DC=Contoso,DC=local",
TMP,HR,"OU=Workstations,OU=HR,OU=Tampa,DC=Contoso,DC=local",
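
The ADGroup column is empty in this sample.  If you decide to light it up, a hypothetical handler might look like this (requires the ActiveDirectory module; nothing in the script above calls this yet):

$row = $LocData[$Script:LocIndex]
if (![string]::IsNullOrEmpty($row.ADGroup)) {
  # add the computer account to the AD security group named in the CSV
  Add-ADGroupMember -Identity $row.ADGroup -Members (Get-ADComputer $env:COMPUTERNAME)
}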

Cheesy Examples

Example: Assign-UserDevice.ps1 -MaxUserNameLength 9 -Verbose

Summary and Conclusion

As you may have surmised by now, everything you’ve read above is completely stupid and useless. You’re shaking your head in disbelief that you skipped some other opportunity to read this, and you should have chosen otherwise, even if that other opportunity was a prostate exam.  You are now dumber for having read this.

You’re welcome.

databases, Scripting, System Center, Technology

Miscellaneous SCCM Configuration Stuff using PowerShell with Fries and a Coke

Rather than trying to build some Frankenstein stack of horrors, I decided to piecemeal this instead. What I mean is that in the past I would approach everything like I did back in my app-dev life, and try to make everything an API stack. But more often, for my needs anyway, I don’t need a giant roll-around tool case with built-in workbench. I just need a toolbox with a select group of tools to fit my project tasks.  This makes it easier to cherry-pick useful portions and ignore, or laugh at the rest, as you see fit.  Anyhow, hopefully some of it is useful to others.

  • Version 1.0 – 06/05/2018 – initial post
  • Version 1.1 – 06/08/2018 – added more crappy examples to bore you to death

Purpose:  Why not?

Intent: Automate some or all of the tasks with installing Configuration Manager on a modern Windows platform using PowerShell.

Caveats: You might have better alternatives to each of these snippets.  That’s cool.

Assumptions:  Most examples are intended for processing on the primary site server or CAS, rather than from a remote workstation.  However, considering the author, they can easily be improved upon.

Disclaimer: Provided “as-is” without warranties, test before using in production, blah blah blah.

Example Code Snippets

Set SQL Server Memory Allocation

Note:  Neither dbatools nor sqlps provide a direct means for configuring minimum allocated memory for SQL Server instances.  For the max-only example, I’m using dbatools for simplicity.  For the min and max example, I’m using SMO, because SMO contains “MO”, and “MO” is used for phrases like “mo money!” and “mo coffee!”

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="SQL Host Name")]
  [string] $SqlInstance = "$($env:COMPUTERNAME).$($env:USERDNSDOMAIN)",
  [parameter(Mandatory=$False, HelpMessage="Mo Memory. Mo Memory!")]
  [int32] $MaxMemMB = 25600
)
# following line is optional unless you've already finished off that bottle of wine
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module dbatools -AllowClobber -SkipPublisherCheck -Force
Import-Module dbatools
Set-DbaMaxMemory -SqlInstance $SqlInstance -MaxMB $MaxMemMB

Using SMO, because it has “mo” in the name…

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="SQL Host Name")]
  [string] $SqlInstance = "$($env:COMPUTERNAME).$($env:USERDNSDOMAIN)",
  [parameter(Mandatory=$False, HelpMessage="Mo Memory. Mo Memory!")]
  [int32] $MaxMemMB = 25600
)
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$srv = New-Object Microsoft.SQLServer.Management.Smo.Server($SqlInstance)
if ($srv.Status) {
  # mo memory: max comes from the parameter, min is hard-coded at 8 GB
  $srv.Configuration.MaxServerMemory.ConfigValue = $MaxMemMB
  $srv.Configuration.MinServerMemory.ConfigValue = 8192
  $srv.Configuration.Alter()
}

Set CM Database Recovery Model to Simple

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="Server Name")]
  [string] $SqlInstance = "$($env:COMPUTERNAME).$($env:USERDNSDOMAIN)",
  [parameter(Mandatory=$False, HelpMessage="Site Code")]
  [string] $SiteCode = "P01"
)
Import-Module dbatools
Set-DbaDbRecoveryModel -SqlInstance $SqlInstance -Database "CM_$SiteCode" -RecoveryModel SIMPLE
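
To verify the change stuck, dbatools can read it back (sketch):

Get-DbaDbRecoveryModel -SqlInstance $SqlInstance -Database "CM_$SiteCode"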

Set CM Database Service Principal Name (SPN)

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="SQL Host Name")]
  [string] $SqlInstance = "$($env:COMPUTERNAME).$($env:USERDNSDOMAIN)",
  [parameter(Mandatory=$False, HelpMessage="SQL Instance Name")]
  [string] $InstanceName = "MSSQLSvc",
  [parameter(Mandatory=$False, HelpMessage="SQL Server Account")]
  [string] $SqlAccount = "$($env:USERDOMAIN)\cm-sql"
)
$SpnShort = $SqlInstance.split('.')[0]
if ((Test-DbaSpn -ComputerName $SqlInstance).InstanceServiceAccount[0] -ne $SqlAccount) {
  $Spn1 = "$InstanceName/$SpnShort:1433"
  $Spn2 = "$InstanceName/$SqlInstance:1433"
  try {
    Set-DbaSpn -SPN $Spn1 -ServiceAccount $SqlAccount -Credential (Get-Credential)
    Set-DbaSpn -SPN $Spn2 -ServiceAccount $SqlAccount -Credential (Get-Credential)
  }
  catch {
    Write-Error $_.Exception.Message
  }
}
else {
  Write-Warning "SPN is already configured.  Go back to sleep"
}

Add CM SQL Service Account to “Log on as a Service” Rights

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="Service Account Name")]
  [string] $AccountName = "$($env:USERDOMAIN)\cm-sql"
)
Install-Module carbon -SkipPublisherCheck -AllowClobber -Force
if ((Get-Privilege -Identity $AccountName) -notcontains 'SeServiceLogonRight') {
  try {
    Grant-Privilege -Identity $AccountName -Privilege SeServiceLogonRight
  }
  catch {
    Write-Error $_.Exception.Message
  }
}
else {
  Write-Warning "Already granted service logon rights. Continue drinking"
}

Set WSUS IIS Application Pool properties

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="Queue Length")]
  [int32] $QueueLength = 2000,
  [parameter(Mandatory=$False, HelpMessage="Private Memory Limit")]
  [int32] $PrivateMemoryLimit = 7372800
)
Import-Module WebAdministration -DisableNameChecking
try {
  Set-ItemProperty IIS:\AppPool\WsusPool -Name queueLength -Value $QueueLength
  Set-ItemProperty IIS:\AppPool\WsusPool -Name recycling.periodicRestart.privateMemory -Value $PrivateMemoryLimit
}
catch {
  Write-Error $_.Exception.Message
}

Move WSUS SQL Database Files

[CmdletBinding()]
param (
    [parameter(Mandatory=$False, HelpMessage="New Database Files Path")]
    [string] $NewFolderPath = "G:\Database"
)
$ServerName = $env:COMPUTERNAME
$DatabaseName = "SUSDB"
$ServiceName = "WsusService"
$AppPool = "WsusPool"

if (!(Test-Path $NewFolderPath)) { mkdir $NewFolderPath -Force }
if (!(Test-Path $NewFolderPath)) {
  Write-Error "Your request died a horrible flaming death."
  break
}
Import-Module WebAdministration
Write-Verbose "stopping WSUS application pool"
Stop-WebAppPool -Name $AppPool
Write-Verbose "stopping WSUS service"
Get-Service -Name $ServiceName | Stop-Service

Import-Module SQLPS -DisableNameChecking
$ServerSource = New-Object "Microsoft.SqlServer.Management.Smo.Server" $ServerName

Write-Verbose "detaching WSUS SUSDB database"
$Db = $ServerSource.Databases | Where-Object {$_.Name -eq $DatabaseName}
$CurrentPath = $Db.PrimaryFilePath
$ServerSource.DetachDatabase($DatabaseName, $True, $True)
$files = Get-ChildItem -Path $CurrentPath -Filter "$DatabaseName*.??f"
Write-Verbose "moving database files to $NewFolderPath"
$files | Move-Item -Destination $NewFolderPath
$files = (Get-ChildItem -Path $NewFolderPath -Filter "$DatabaseName*.??f") | Select-Object -ExpandProperty FullName
Write-Verbose "attaching database files"
# hard-coded 'sa' as the DB owner because I'm lazy AF
$ServerSource.AttachDatabase($DatabaseName, $files, 'sa')

Write-Verbose "starting WSUS service"
Get-Service -Name $ServiceName | Start-Service

Write-Verbose "starting WSUS app pool"
Start-WebAppPool -Name $AppPool

Write-Host "WSUS database files have been moved to $NewFolderPath"

Create System Management AD Container

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="Domain Suffix")]
  [string] $DomainSuffix = "DC=contoso,DC=local"
)
if (!(Get-Module -ListAvailable | Where-Object {$_.Name -eq 'ActiveDirectory'})) {
  Install-WindowsFeature RSAT-AD-Tools -IncludeAllSubFeature -IncludeManagementTools
}
Import-Module ServerManager
Import-Module ActiveDirectory

$containerDN = "CN=System Management,CN=System,$DomainSuffix"
# Get-ADObject throws when -Identity is not found, so query with -Filter instead
if (!(Get-ADObject -Filter { DistinguishedName -eq $containerDN })) {
  New-ADObject -Name 'System Management' -Path "CN=System,$DomainSuffix" -Type container -PassThru |
    Set-ADObject -ProtectedFromAccidentalDeletion:$True -Confirm:$False
}

Grant Permissions on System Management Container (added in 1.1)

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="Your Domain Suffix")]
  [string] $DomainSuffix = "DC=contoso,DC=local",
  [parameter(Mandatory=$False, HelpMessage="Site Server Name")]
  [string] $SiteServer = "CM01"
)
$AdObj = [ADSI]("LDAP://CN=System Management,CN=System,$DomainSuffix")
try {
  $computer = Get-ADComputer $SiteServer
  $sid = [System.Security.Principal.SecurityIdentifier] $computer.SID
  $identity = [System.Security.Principal.IdentityReference] $SID
  $privs = [System.DirectoryServices.ActiveDirectoryRights] "GenericAll"
  $type = [System.Security.AccessControl.AccessControlType] "Allow"
  $inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All"
  $ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity, $privs, $type, $inheritanceType
  $AdObj.psbase.ObjectSecurity.AddAccessRule($ACE)
  $AdObj.psbase.commitchanges()
}
catch {
  Write-Error $_.Exception.Message
}

Import Windows 10 OS Image (added in 1.1)

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="OS Source Root Location")]
  [string] $ImageSource = "\\foo\sources\osimages\w10-1803",
  [parameter(Mandatory=$False, HelpMessage="Name to Assign")]
  [string] $OSName = "Windows 10 x64 1803"
)
$Source = "$ImageSource\sources\install.wim"
if (!(Test-Path $Source)) {
  Write-Error "Boom!  And just like that your code ate itself."
  break
}
try {
  New-CMOperatingSystemImage -Name $OSName -Path $Source -Description $OSName -ErrorAction Stop
}
catch {
  Write-Error $_.Exception.Message
}

Import Windows 10 OS Upgrade Package (added in 1.1)

[CmdletBinding()]
param (
 [parameter(Mandatory=$False, HelpMessage="OS Source Root Location")]
 [string] $ImageSource = "\\foo\sources\osimages\w10-1803",
 [parameter(Mandatory=$False, HelpMessage="Name to Assign")]
 [string] $OSName = "Windows 10 x64 1803"
)
if (!(Test-Path $ImageSource)) {
  Write-Error "I bet Jimmy deleted your source folder. You know what to do next."
  break
}
try {
  New-CMOperatingSystemInstaller -Name $OSName -Path $ImageSource -Description $OSName -ErrorAction Stop
}
catch {
  Write-Error $_.Exception.Message
}

Create a Console Folder (added in 1.1)

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="Site Code")]
  [string] $SiteCode = "P01",
  [parameter(Mandatory=$False, HelpMessage="Folder Name")]
  [string] $FolderName = "Windows Client",
  [parameter(Mandatory=$False, HelpMessage="Parent Folder")]
  [ValidateSet('Application','BootImage','ConfigurationBaseline','ConfigurationItem','DeviceCollection','Driver','DriverPackage','OperatingSystemImage','OperatingSystemInstaller','Package','Query','SoftwareMetering','SoftwareUpdate','TaskSequence','UserCollection','UserStateMigration','VirtualHardDisk')]
  [string] $ParentFolder = "OperatingSystemImage"
)
Set-Location "$($SiteCode):"
try {
  New-Item -Path "$SiteCode`:\$ParentFolder" -Name $FolderName -ErrorAction Stop
}
catch {
  Write-Error $_.Exception.Message
}

Move a Console Item into a Custom Folder (added in 1.1)

$OsImage = "Windows 10 x64 1803"
$Folder = "\OperatingSystemImage\Windows Client"
try {
  Get-CMOperatingSystemImage -Name $OsImage |
    Move-CMObject -FolderPath $Folder
}
catch {
  Write-Error $_.Exception.Message
}

Semi-Bonus: Create a Device Collection for each OS in AD

[CmdletBinding()]
param (
  [parameter(Mandatory=$False, HelpMessage="Site Code")]
  [string] $SiteCode = "P01"
)
Import-Module ActiveDirectory
$osnames = Get-ADComputer -Filter * -Properties "operatingSystem" | Select-Object -ExpandProperty operatingSystem -Unique
$key = "HKLM:\SOFTWARE\Microsoft\SMS\Setup"
$val = "UI Installation Directory"
$uiPath = (Get-Item -Path $key).GetValue($val)
$modulePath = "$uiPath\bin\ConfigurationManager.psd1"
if (!(Test-Path $modulePath)) {
  Write-Error "Sudden implosion of planetary system.  The end. Roll the credits and dont forget to drop your 3D glasses in the barrel outside."
  break
}
Import-Module $modulePath
Set-Location "$($SiteCode):"
foreach ($os in $osnames) {
  $collname = "Devices - $os"
  try {
    $sched = New-CMSchedule -RecurInterval Days -RecurCount 7  # refresh every 7 days
    New-CMCollection -Name $collname -CollectionType Device -LimitingCollectionName "All Systems" -RefreshType Both -RefreshSchedule $sched -ErrorAction SilentlyContinue
    $query = 'select distinct SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name, SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client from SMS_R_System where SMS_R_System.OperatingSystemNameandVersion="'+$os+'"'
    Add-CMDeviceCollectionQueryMembershipRule -CollectionName $collname -RuleName "1" -QueryExpression $query
    Write-Host "collection created: $collname"
  }
  catch {
    Write-Error $_.Exception.Message
  }
}