
What Not to Do With ConfigMgr, 1.0.1

[note: this post has been sitting in my drafts folder for over a year, but recent events reminded me to dust it off and post it]

One of my colleagues, the infamous @chadstech, sent a link to our team, to the slide deck from the Channel9 session (MS04) “30 Things you should never do with System Center Configuration Manager” by @orinthomas and @maccaoz. If you haven’t seen (or read) it already, I strongly recommend doing so first.

It’s from 2016, so even though it’s a few years old now, it still holds up very well in mid 2019. However, everyone who’s ever worked with that product knows that the list could become a Netflix series.

This blog post is not going to repeat the above; instead, it appends to that list some things I still see in a variety of environments today. Things which really should be nipped in the bud, so to speak. Baby steps.

Using a Site Server like a Desktop

Don’t do it. Install the console on your crappy little desktop or laptop and use that. Leave your poor server alone. Avoid logging into servers (in general) unless you REALLY need to perform local tasks, and that’s it. Anything you CAN do remotely, should be done remotely.

If installing/maintaining the ConfigMgr console is your concern: forget that. The days of having to build and deploy console packages are gone. Install it once, and let it update itself when new versions are available. Kind of like Notepad++. Nice and easy.

Why? Because…

  • It drags on server resources and performance.
  • It creates a greater security and stability risk to the environment.
  • The more casual you are with your servers, the sloppier you’ll get, and eventually you’ll do something you’ll regret.

Whatever your excuse has been thus far, stop it.

Anti-Virus Over-Protection

Even in 2019, with so many tools floating about like Symantec, McAfee, Sophos, CrowdStrike, and so on, when I ask if the “exclusions” are configured to support Configuration Manager, I often get a confused look or an embarrassed chuckle. Gah!!! Chalkboard scratch!

There are several lists of things to exclude from “real-time” or “on-demand” scanning, like this one, and this one. Pick one. Failing to do this VERY often leads to breaks in processes like application deployments, software updates deployments, and policy updates.

Also important: with each new release of Configuration Manager, read the release notes and look for new folders, log files, services or processes that may be introduced. Be sure to adjust your exclusions to suit.
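
If you’re running Windows Defender, the exclusions can even be scripted with the built-in Defender cmdlets. A rough sketch only — the paths below are common defaults, but verify them against the current Microsoft-published exclusion list for your version and your actual install location:

```powershell
# NOTE: example paths only -- confirm against the published exclusion list for your site version
Add-MpPreference -ExclusionPath "C:\Program Files\Microsoft Configuration Manager\Inboxes"
Add-MpPreference -ExclusionPath "C:\Windows\ccmcache"      # client content cache
Add-MpPreference -ExclusionProcess "CcmExec.exe"           # SMS Agent Host
```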

Ignoring Group Policy Conflicts

Whatever you’re doing with regards to GPO settings, make damned sure you’re not also doing the same things with Configuration Manager. The two “can” be combined (in rare cases) to address a configuration control requirement, and you can sew two heads on a cow, but that doesn’t mean it’s the best approach.

Pick one, or the other, only. If you have WSUS settings deployed by GPO, and are getting ready to roll out Software Updates Management via Configuration Manager, stop and carefully review what the GPO’s are doing and make adjustments to remove any possible conflicts.

And, for the sake of caffeine: DOCUMENT your settings wherever they live. GPO’s, CI’s or CB’s in ConfigMgr, scheduled tasks, whatever. DOCUMENT THEM! Use the “Comments” or “Description” fields to your advantage. They can be mined and analyzed easily (take a look at PowerShell module GPODOC for example / shameless plug).
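
Even without installing anything extra, the built-in GroupPolicy module can mine those fields. A quick sketch (this is not GPODOC itself) that flags every GPO nobody bothered to document:

```powershell
Import-Module GroupPolicy

# list GPOs with an empty Description so you know where to start documenting
Get-GPO -All |
    Where-Object { [string]::IsNullOrWhiteSpace($_.Description) } |
    Select-Object DisplayName, Owner, ModificationTime
```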

One-Site-Fits-All Deployments

I’ve seen places that only use packages, or only use Task Sequences, or only use script wrapping, or only repackage with AdminStudio (or some alternative). That’s like doing every repair job in your house or apartment with a crowbar.

There’s nothing wrong with ANY means of deploying software as long as it’s the most efficient and reliable option for the situation. Just don’t knee-jerk into using one hammer for every nail, screw, and bolt you come across.

Pick the right tool or method for each situation/application. Doing everything “only” one way is ridiculously inefficient and time-wasting.

Sharing SQL Instances

The SQL licensing that comes with a System Center license does not permit hosting third-party products. Not even your own in-house projects, technically speaking. You “can” do it, but you’re not supposed to.

What that means is, when you run into a problem with the SQL Server side of things, and you call Microsoft, and they look at it and see you have added a bunch of unsupported things to it, you’ll likely get the polite scripted response: “Thank you for being a customer. You appear to be running in an unsupported configuration. Unfortunately, we can’t provide assistance unless you are running in a supported configuration. Please address this first and re-open your case if you still need our help. Thank you. Have a nice day. Bye bye now.”

And, now, you’re facing an extended duration of what could have been a simple problem (or no problem at all, since your third-party app might be the problem).

Configuration Manager is extremely demanding of its SQL resources. Careful tuning and maintenance is VERY VERY VERY often the difference between a smooth-running site, and an absolute piece of shit site. I can’t stress that enough.

Leeching SQL Resources

Some 3rd party products, which I’m advised not to name for various legal reasons, provide “connection” services into the Configuration Manager database (or SMS Provider). Attaching things to any system incurs a performance cost.

Before you consider installing a “trial” copy of one of those in your production environment, do it in a test environment first. Benchmark your environment before installing it, then again after. Pay particularly close attention to what controls that product provides over connection tuning (polling frequency, types of batch operations, etc.).

And, for God’s sake (if you’re an atheist, just replace that with whatever cheeseburger or vegan deity you prefer), if you did install some connected product, do some diagnostic checking to see what it’s really doing under the hood.

And just as important: if you let go of the trial (or didn’t renew a purchased license) – UNINSTALL that product and make sure its sticky little tentacles are also removed.

Ignoring Backups

Make sure backups are configured and working properly. If you haven’t done a site restore/recovery before, or it’s been a while, try it out in an isolated test environment. Make sure you understand how it works, and how it behaves (duration, results, options, etc. )
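
One cheap sanity check is watching the backup log’s timestamp. A sketch, assuming a default install path (adjust for your site server — the path and the two-day threshold are my assumptions, not gospel):

```powershell
# smsbkup.log is written each time the Backup Site Server maintenance task runs
$bkupLog = "C:\Program Files\Microsoft Configuration Manager\Logs\smsbkup.log"  # path is an assumption
if (Test-Path $bkupLog) {
    $age = (Get-Date) - (Get-Item $bkupLog).LastWriteTime
    if ($age.TotalDays -gt 2) {
        Write-Warning "Last backup log activity was $([math]::Round($age.TotalDays,1)) days ago!"
    }
}
else {
    Write-Warning "No backup log found - is the Backup Site Server task even enabled?"
}
```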

Ignoring the Logs

Every single time I get a question from a customer or colleague about some “problem” or “issue” with anything ConfigMgr (or Windows/Office) related, I usually ask “what do the logs show?” I’d say, on average, that around 80% of the time, I get silence or “hold on, I’ll check”.

If you ask me for help with any Microsoft product or technology, the first thing I will do is ask questions. The second thing I will do is look at the appropriate logs (or the Windows Event Logs).

So, when the log says “unable to connect to <insert URL here>” and I read that, and try to connect to same URL and can’t, I will say “Looks like the site isn’t responding. Here’s my invoice for $40,000 and an Amazon gift card”. And then you say “but I could’ve done that for free?!” I will just smile, and hold out my greedy little hand.

Keep in mind that the server and client logs may change with new releases. New features often add new log files to look at.

Check the logs first.

Ignoring AD: Cleanups

Managers: “How accurate is Configuration Manager?”

Answer: “How clean is your environment?”

Managers: (confused look)

If you don’t have a process in place to insure your environment is maintained to remove invalid objects and data, any system that depends on that will also be inaccurate. It’s just a basic law of nature.

Step 1 – Clean up Active Directory. Remove accounts that no longer exist. Move unconfirmed accounts to a designated OU until verified or removed. This process is EASY to automate, by the way.
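
For example, with the ActiveDirectory module you can sweep stale computer accounts into a quarantine OU in a couple of lines. The 90-day window and the OU path here are just placeholders:

```powershell
Import-Module ActiveDirectory

# find computer accounts that haven't authenticated in 90 days and park them for review
Search-ADAccount -AccountInactive -ComputersOnly -TimeSpan 90.00:00:00 |
    Move-ADObject -TargetPath "OU=Quarantine,DC=contoso,DC=com" -WhatIf  # drop -WhatIf when you trust it
```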

Step 2 – Adjust ConfigMgr discovery method settings to suit your environment. Don’t poll for changes every hour if things really only change monthly. And don’t poll once a month if things really change weekly. You get the idea. Just don’t be stupid. Drink more coffee and think it through.

Step 3 – I don’t have a step 3, but the fact that you actually read to this point brings a tear to my eyes. Thank you!

Ignoring AD: Structural Changes

But wait – there’s more! Don’t forget to pay attention to these sneaky little turds:

  • Additions and changes to subnets, but forgetting to update Sites and Services
  • Changes to domain controllers, but not updating DNS, Sites and Services or DHCP
  • Changes to OUs, but forgetting to update GPO links
  • All the above + forgetting to adjust ConfigMgr discovery methods to suit.

Ignoring DNS and DHCP

“It’s never DNS!” is really not that funny, because it’s very often DNS. Or the refusal to admit there might be a problem with DNS. For whatever reason, many admins treat DNS like it’s their child. If you suggest there might be something wrong with it, it’s like a teacher suggesting their child might be a brat, or stupid, or worse: a politician. The other source of weirdness is DHCP and its interaction with DNS.

Take some time to review your environment and see if you should make adjustments to DHCP lease durations, DNS scavenging, and so on. Sometimes a little tweak here and there (with CAREFUL planning) can clean things up and remove a lot of client issues as well.

Check DHCP lease settings and DNS scavenging to make sure they are closely aligned to how often clients move around the environment (physically). This is especially relevant with multi-building campus environments with wi-fi and roaming devices.
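
Lining the two up is easy to script with the RSAT DnsServer and DhcpServer modules. A sketch (the server names are placeholders):

```powershell
# compare DHCP lease durations against the DNS scavenging windows
Get-DhcpServerv4Scope -ComputerName "dhcp01" |
    Select-Object ScopeId, Name, LeaseDuration

# shows refresh/no-refresh intervals and whether scavenging is enabled at all
Get-DnsServerScavenging -ComputerName "dns01"
```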

Task Sequence Repetition

A few releases back, Microsoft added child Task Sequence features to ConfigMgr. If you’re unaware of this, read on.

Basically, you can insert steps which call other Task Sequences. In Orchestrator or Azure Automation parlance this is very much like Runbooks calling other Runbooks. Why is this important? Because it allows you to refactor your task sequences to make things simpler and easier to manage.

How so?

Let’s say you have a dozen Task Sequences, and many (or all) of them contain identical steps, like bundles of applications, configuration tasks, or driver installations. And each time something needs updating, like a new application version, or a new device driver, you have to edit each Task Sequence where you “recall” it being used. Eventually, you’ll miss one.

That’s how 737 Max planes fall out of the sky.

At the very least, it’s time wasted which could be better spent on other things, like drinking, gambling and shooting guns at things.

Create a new Task Sequence for each redundant step (or group of steps) used in other Task Sequences. Then replace those chunks of goo with a link to the new “child” Task Sequence. Now you can easily update things in one place and be done with it. Easy. Efficient.

Ignoring Staffing

Last, but certainly not least is staffing. Typically, this refers to not having enough of it. In a few cases, it’s too many. If your organization expects you to cover Configuration Manager, and its SQL Server aspects, along with clients, deployments, imaging, updates, and configuration policies, AND maintain other systems or processes, it’s time for some discussion, or a new job.

If you are an IT manager, and you allow your organization to end up with a critical business operation hinging on one person, that’s foolish. You are one drunk driver away from a massive problem.

An over-burdened employee won’t have time to create or maintain accurate documentation, so forget the crazy idea of finding a quick replacement and zero downtime.

In team situations, it’s important to encourage everyone to do their own learning, rather than depend on the lead “guru” all the time. This is another single point of failure situation you can avoid.

If there’s anyone who knows every single feature, process and quirk within Configuration Manager, I haven’t met them yet. I’ve been on calls with PFE’s and senior support folks and heard them say “Oh, I didn’t know that” at times. It doesn’t make sense to expect all of your knowledge to flow out of one person. Twitter, blogs, user groups, books, video tutorials, and more can help you gain a huge amount of awareness of features and best practices.

That’s all for now. Happy configuring! 🙂


7 SCCM Task Sequence Tips

I purposely left out “OSD” in the title, because I see a significant increase in non-OSD tasks being performed with Task Sequences. This includes application deployments, complex configuration sequences, and so on. Whether those could be done more efficiently/effectively using other tools is a topic for another beer-infused, knife-slinging, baseball bat-swinging discussion. Just let me know early-on, so I can sneak out the back door.

Anyhow, this is just a short list of “tips” I find to be useful when it comes to planning, designing, building, testing, deploying and maintaining Task Sequences in a production environment. Why 7? Because it’s supposed to be lucky.

Disclaimer

Are you sitting down? Good. This might be a big shock to you, but I am *not* the world’s foremost expert on Task Sequences, or Configuration Manager. And some (maybe all) of these “tips” may be eye-rolling old news to you. But hopefully, some of this will be helpful to you.

Start Simple!

So often, I see someone jump in and start piling everything into a new Task Sequence at once, and THEN trying it out. This can make the troubleshooting process much more painful and time-consuming than it needs to be. Start with what developers call a “scaffold”, and gradually build on that.

I usually start with the primary task at hand, such as “install Windows 10 bare metal”, and test that with only the absolute bare minimum steps required to get a successful deployment. Then add the next-most-important steps in layers and continue on.

However you decide to start, just be sure to test each change before adding the next. It might feel tedious and time-wasting, but it can save you 10 times the hassle later on.

Divide and Conquer

Don’t forget that the latest few builds of ConfigMgr (and MDT btw) support “child”, or nested, Task Sequences. In situations where you have multiple Task Sequences which share common steps, or groups of steps, consider pulling those out to a dedicated Task Sequence and link it where needed. Much MUCH easier to maintain when changes are needed.

Some common examples where this has been effective (there are many more I assure you) include Application Installations, Drivers, Conditional blocks of steps (group has a condition, which controls sub-level steps within it, etc.), and setup steps (detection steps with task sequence variable assignments at the very top of the sequence, etc.)

I’m also surprised how many people are not aware that you can open two Task Sequence editors at the same time, side-by-side, and copy/paste between them. No need to re-create things, when you can simply copy them.

Organize and Label

If you are going to have multiple phases for build/test/deploy for your Task Sequences, it may help to do one (or both) of the following:

  • Use console folders to organize them by phase (e.g. Dev, Test, Prod, and so on)
  • Use a consistent naming convention which clearly identifies the state of the Task Sequence (e.g. “… – Prod – 1.2”)

This is especially helpful with team environments where communications aren’t always optimal (multiple locations, language barriers, time zones, etc.)

Establish a policy and communicate it to everyone, then let the process manage itself. For example: “All you drunken idiots, listen up! From now on, only use Task Sequences with ‘Prod’ in the name, unless you know it’s for internal testing only! Any exceptions to this require you eating a can of bug spray.”

Documentation

Wherever you can use a comment, description, or note field in anything, you should. This applies to more than ConfigMgr as well. Group Policy Objects and GP settings are rife with entries that have no explanation of why the setting exists or who created it. Don’t let this minefield creep into your ConfigMgr environment too.

Shameless plug: For help with identifying GPOs and settings (including preferences) which do or don’t have comments, take a look at the GpoDoc PowerShell module, available in the PowerShell Gallery, and wherever crackheads can be found.

The examples below show some common places that seem to be left blank in many (most) organizations I run across.

Other places where documentation (comments) can be helpful are the “Scripts” items, especially the Approval comment box.

Side note: You can query the SQL database view vSMS_Scripts, and check the “Comment” column values to determine what approval comments have been added to each item (or not). Then use the “Approver” column values to identify who to terminate.
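
With dbatools already in the toolbox, that side-note query is a one-liner from PowerShell. A sketch — the instance and database names are placeholders, and only the Comment and Approver columns are the ones called out above:

```powershell
# list Script approvals that have no comment, and who approved them
Invoke-DbaQuery -SqlInstance "cm01" -Database "CM_P01" -Query "SELECT Approver, Comment FROM vSMS_Scripts" |
    Where-Object { [string]::IsNullOrWhiteSpace($_.Comment) } |
    Select-Object Approver
```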

Access Control

This is aimed at larger ConfigMgr teams. I’ve seen environments with a dozen “admins” working in the console, all with Full Administrator rights. If you can’t rein that wild-west show in a bit, at least sit down and agree who will maintain Task Sequences. Everyone else should stay out of them!

This is especially important if the team is not co-located. One customer I know was going through a merger (M&A) and, apparently, one group in another country, didn’t like some of the steps in their Windows 10 task sequence, so they deleted the steps. No notifications were sent. It was discovered when the first group started hearing about things missing from newly-imaged devices.

In that case, the things needed were (A) better communications between the two groups, and (B) proper security controls. After a few meetings it was agreed that the steps in question would get some condition tests to control where and when they were enabled.

Make Backups!!!!

Holy cow, do I see a lot of environments where the Backup site maintenance task isn’t enabled. That’s like walking into a biker bar wearing a “Bikers are all sissies!” t-shirt. You’re just asking for trouble.

Besides a (highly recommended) site backup, however, it often pays dividends to make what I call “tactical backups”. This includes such SUPER-BASIC things as:

  • Make a copy of your production task sequences (in the console) – This is often crucial when a batch of changes somehow jacks up your task sequence and you could spend hours (or days) figuring out which change caused it. Having a copy makes it really easy (and fast) to recover and avoid lengthy impact to production.
  • Export your production task sequences – Whether this is part of a change management process (vaulting, etc.) or just as a CYA step, it can also make it easy to recover a broken Task Sequence quickly.

Either of these are usually much less painful than pulling from a site backup.

As a double-added precaution, I highly recommend that anytime you intend to make a change to a production task sequence, you make a copy of it first. Then if your edits don’t work, instead of spending hours troubleshooting why a revert attempt isn’t actually reverting, you can *really* revert back to a working version.
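
The export half of that is easy to script from a ConfigMgr console PowerShell session. A sketch — the site code, task sequence name, and share path are all made up:

```powershell
# run from a console-connected PowerShell session (CM site drive loaded)
Set-Location "P01:"   # your site code drive

# export a dated CYA copy before touching the production task sequence
Get-CMTaskSequence -Name "Windows 10 - Prod - 1.2" |
    Export-CMTaskSequence -ExportFilePath "\\backup\cya\W10-Prod-1.2-$(Get-Date -Format yyyyMMdd).zip"
```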

Don’t Overdo It

One final piece of advice is this: Just because you get comfortable using a particular hammer, don’t let this fool you into thinking everything is a nail. Task Sequences are great, and often incredibly useful, but they’re not always the optimal solution to every challenge. Sometimes it’s best to stick with a very basic approach, like a Package, Application, or even a Script.

I’ve worked with customers who prefer to do *everything* via a Task Sequence. Even when it was obvious that it wasn’t necessary. The reason given was that it was what they were most familiar with at the time. They have since relaxed that default a bit, and saved themselves quite a bit of time. That said, Task Sequences are nice and should always be on your short list of options to solve a deployment need.

Summary

I hope this was helpful. If not, you can also print this out, and use it as a toilet bombing target. Just be sure to load up on a good Mexican lunch before you do. Cheers!


Things I actually like

Because I hear that I’m normally dissatisfied with schlocky half-assed semi-passionate stuff, I decided to make a list of things I actually, really do like. These are in no particular order.

  • My wife and kids (okay, and their spouses too)
  • Good food
  • Good beer and wine
  • More good beer and wine
  • Windows 10
  • Good Coffee
  • System Center Configuration Manager
  • Cute baby animals
  • Water. Because it’s hard to make beer and coffee without it
  • Movies
  • News about stupid people being eaten by beasts they were taunting
  • 24 hours without hearing any mention of politics
  • Good BBQ. Really really good BBQ
  • Podcasts
  • Sunny warm weather
  • Flowers
  • The Home Depot
  • Cool techy stuff
  • Tesla
  • Kayaking
  • Hiking
  • Drawing (when I’m in the mood)
  • Playing drums
  • Telling the same story over and over until it induces vomiting in others
  • Trying to figure out custom vehicle license plates
  • Flip flops
  • Ear buds (can’t travel without them!)
  • Frank Zappa (and Dweezil) (okay, and most of the people who have ever passed through the band at one point)
  • A nice smartphone
  • More coffee or beer
  • Hamburgers
  • Tacos
  • Indian food
  • Cholula Sauce
  • Thai food
  • Bangladeshi food
  • Glenfiddich
  • People who read this far into my silly lists
  • Twitter!
  • Netflix
  • Wegman’s grocery
  • Fry’s
  • Dove Micellar bath soap
  • In-n-Out
  • Mountains
  • That incredible air freshener stuff they spray in most Hilton lobbies
  • Beaches
  • Cops pulling idiots over after they just flipped me off
  • Helping homeless people who don’t ask for it
  • Laughter (gut-busting, face down, smack the table kind)
  • Telling everyone who hasn’t seen Avengers Endgame yet how Thanos turns out to be Samuel L. Jackson at the very end
  • More coffee

No Answers Yet. Just Questions

So, I’m running into a technical snag and not having much luck solving it, yet. Hence, why I’m posting this here: in the hopes you might see this and respond with an answer. So far, the best (hopeful) solution offered is to run the process as a service (thanks Tony!), which I have not yet tried.

The Challenge

I need to schedule a PowerShell script (shown below) to run daily, using either a service account, or the local ‘SYSTEM’ account (I’ve tried both). This script needs to invoke functions from the PowerShell module ‘dbatools’, and produce output to a log file. Keep in mind that this is a ‘test’ script, and the actual script is far more eye-destructive ugliness than I can share here.

The platforms I’ve tested on (two so far) are the same:

  • Windows Server 2016 (domain-joined member server, entirely different forests/domains)
  • SQL Server 2017
  • PowerShell 5.1 (standard, built-in)
  • dbatools version 0.9.818

The Testing

Here is the script (below). When I run it in an interactive PowerShell console, it works fine. Even when I launch the console using “psexec.exe -i -s powershell -NoProfile” it works absolutely fine.

[CmdletBinding()]
param (
    [string] $server = $env:COMPUTERNAME,
    [string] $database = "MyDatabase",
    [string] $tablename = "table123"
)
Start-Transcript -OutputDirectory "x:\scripts"
try {
    Write-Output "importing module: dbatools"
    Import-Module dbatools
    Write-Output "version: $((Get-Module dbatools).Version -join '.')"
    Write-Output "verifying table"
    if (Get-DbaDbTable -SqlInstance $server -Database $database -Table $tablename -ErrorAction Stop) {
        Write-Output "found it!"
    }
    else {
        Write-Output "not found"
    }
}
catch {
    Write-Output "error: $($Error[0].Exception.Message)"
}
finally {
    Stop-Transcript
}

The Scheduled Task

The Scheduled Task is set to run with the following configuration:

  • Name: PS Test
  • Trigger: Daily at 10:00 am
  • Action: powershell -ExecutionPolicy ByPass -NoProfile -File x:\scripts\dbtest.ps1
  • Run As: NT Authority\System (aka “SYSTEM”)
  • Other: Highest privileges: enabled (checked)

The Problem

So far, no matter how I configure the task, it skips dbatools entirely and ends. Run it in an interactive console and it works perfectly. Here’s an example transcript from an interactive console invocation:

 Windows PowerShell transcript start
Start time: 20190430105649
Username: SKATTERBRAINZ\SYSTEM
RunAs User: SKATTERBRAINZ\SYSTEM
Machine: CM01 (Microsoft Windows NT 10.0.14393.0)
Host Application: powershell.exe -NoProfile
Process ID: 7300
PSVersion: 5.1.14393.2248
PSEdition: Desktop
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.2248
BuildVersion: 10.0.14393.2248
CLRVersion: 4.0.30319.42000
WSManStackVersion: 3.0
PSRemotingProtocolVersion: 2.3
SerializationVersion: 1.1.0.1
*************************
Transcript started, output file is x:\scripts\PowerShell_transcript.CM01.ycN6+GrS.20190430105649.txt
importing module: dbatools
version: 0.9.818
verifying table
found it!
*************************
Windows PowerShell transcript end
End time: 20190430105649
*************************

And an example from running via a Scheduled Task as local ‘SYSTEM’ account…

 Windows PowerShell transcript start
Start time: 20190430103137
Username: SKATTERBRAINZ\SYSTEM
RunAs User: SKATTERBRAINZ\SYSTEM
Machine: CM01 (Microsoft Windows NT 10.0.14393.0)
Host Application: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.EXE -ExecutionPolicy ByPass -NoProfile -File x:\SCRIPTS\dbtest.ps1
Process ID: 8312
PSVersion: 5.1.14393.2248
PSEdition: Desktop
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.2248
BuildVersion: 10.0.14393.2248
CLRVersion: 4.0.30319.42000
WSManStackVersion: 3.0
PSRemotingProtocolVersion: 2.3
SerializationVersion: 1.1.0.1
*************************
importing module: dbatools
*************************
Windows PowerShell transcript end
End time: 20190430103142
*************************

It just stops at the import-module step and skips over to the “finally{}” block. No error is returned via “catch{}” which seems odd to me.

UPDATE: 04/30/2019

I commented out the Import-Module call for dbatools (and its version check), and added a database check using the built-in SqlServer module’s Get-SqlDatabase instead…

Write-Output "importing module: dbatools"
#Import-Module dbatools
#Write-Output "version: $((Get-Module dbatools).Version -join '.')"
Write-Output "verifying database"
if (Get-SqlDatabase -ServerInstance "." -Name $database -ErrorAction Stop) {
    Write-Output "database found!"
}
else {
    Write-Output "database not found"
}

Here’s the updated transcript output…

importing module: dbatools
verifying database
database found!
verifying table
WARNING: SQLPS or SqlServer was previously imported during this session. If you encounter weird issues with dbatools, please restart PowerShell, then import dbatools without loading SQLPS or SqlServer first.
WARNING: To disable this message, type: Set-DbatoolsConfig -Name Import.SqlpsCheck -Value $false -PassThru | Register-DbatoolsConfig
found it!

Very weird results (to me, anyway).

UPDATE: 04/30/19 NO. 2 (The Electric Boogaloo)

Apparently, the PowerShell 5.1 transcript feature is what I was relying on to capture output/results (success/fail), and it isn’t reliable. So, modifying my code (per a suggestion from Shawn Melton, Slack “Sqlcommunity” channel) to use Out-File instead seems to work!

# bad kitty!
Start-Transcript -OutputDirectory "x:\scripts"
Write-Output "this works sometimes"

# good kitty!
$logfile = (Join-Path $pwd "logfile.log")
"$(Get-Date -Format 'yyyy-MM-dd hh:mm:ss') try this!" | Out-File $logfile -Append

Microsoft Teams and PowerShell

I just started playing around with the MicrosoftTeams PowerShell module (available in the PowerShell Gallery, use Find-Module MicrosoftTeams for more information). Here’s a quick sample of how you can get started using it…

$conn = Connect-MicrosoftTeams

# list all Teams
Get-Team

# get a specific Team
$team = Get-Team -DisplayName "Benefits"

# create a new Team
$team = New-Team -DisplayName "TechSupport" -Description "Technical Support" -Owner "dave@contoso.com"

# add a few channels to the new Team
New-TeamChannel -GroupId $team.GroupId -DisplayName "Forms Library" -Description "Forms and Templates"
New-TeamChannel -GroupId $team.GroupId -DisplayName "Customers" -Description "Information for customers"
New-TeamChannel -GroupId $team.GroupId -DisplayName "Development" -Description "Applications and DevOps teams"

# get the Id of one Team channel
$channelId = Get-TeamChannel -GroupId $team.GroupId |
    Where-Object {$_.DisplayName -eq 'Development'} |
    Select-Object -ExpandProperty Id

# add a user to a Team
Add-TeamUser -GroupId $team.GroupId -User "dory@contoso.com" -Role Member

Here’s a splatted form of the above example, in case it renders better on some displays…

$conn = Connect-MicrosoftTeams

# list all Teams
Get-Team

# get a specific Team
$team = Get-Team -DisplayName "Benefits"

# create a new Team
$params = @{
    DisplayName = "TechSupport"
    Description = "Technical Support"
    Owner       = "dave@contoso.com"
}
$team = New-Team @params

# add a few channels to the new Team
# NOTE: You could form an array to iterate more efficiently
$params = @{
    GroupId     = $team.GroupId
    DisplayName = "Forms Library"
    Description = "Forms and Templates"
}
New-TeamChannel @params

$params = @{
    GroupId     = $team.GroupId
    DisplayName = "Customers"
    Description = "Information for customers"
}
New-TeamChannel @params

$params = @{
    GroupId     = $team.GroupId
    DisplayName = "Development"
    Description = "Applications and DevOps teams"
}
New-TeamChannel @params

# get the Id of one Team channel
$channelId = Get-TeamChannel -GroupId $team.GroupId |
    Where-Object {$_.DisplayName -eq 'Development'} |
    Select-Object -ExpandProperty Id

# add a user to a Team
$params = @{
    GroupId = $team.GroupId
    User    = "dory@contoso.com"
    Role    = 'Member'
}
Add-TeamUser @params
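
Per the NOTE above, the three channel splats could collapse into an array and a loop. One possible shape, reusing the same $team object:

```powershell
# one hashtable per channel; splat each one alongside the shared GroupId
$channels = @(
    @{ DisplayName = "Forms Library"; Description = "Forms and Templates" }
    @{ DisplayName = "Customers";     Description = "Information for customers" }
    @{ DisplayName = "Development";   Description = "Applications and DevOps teams" }
)
foreach ($c in $channels) {
    New-TeamChannel -GroupId $team.GroupId @c
}
```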


Random thoughts


A short list of things I’ve learned in my fifty-five years on this planet, which has been sitting in my drafts bin for 2 years.

  1. Not everyone who wanders is lost.
  2. Assuming everyone who wanders is lost could mean that you’re lost.
  3. If you hate Coke, it does not automatically mean you love Pepsi.
  4. Newer is not automatically better.
  5. If there’s a salesperson involved, it’s because it needed selling.
  6. Work is never really eliminated.  It’s just moved around.
  7. You can’t truly appreciate something until you’ve worked for it.
  8. The more legs a creature has, the more love it has.  Except when it gets past 8 legs, then it’s scary.
  9. Most people complain most often about things they know the least about.
  10. The best programming language hasn’t been invented yet.
  11. Every generation wants the next to think that they had all the fun.
  12. Every generation thinks they had to work harder than the next.
  13. If it doesn’t cut expenses, or increase revenue, it’s probably junk.
  14. Fixing bugs is not refactoring.
  15. Chances are good that a reboot will fix it.


Some Dimented Dimensions of ConfigMgr Data

So, over the past few weeks, like most of the rest of the incredible people on our team, I’ve been working multiple projects for different customers around SCCM and other things. My biggest struggle is not only keeping names and details aligned with whomever I’m speaking with online, but keeping each of their respective constraints and obstacles aligned as well.

Disclaimer: This article just might almost possibly kind of make sense, so please: read to the end, before you print it out, ball it up, and light it on fire. In all seriousness, this is just ONE way to approach one challenge. I’m sure you may have a better way.

In my case, for example, in customer environment “A”, the SCCM admins have full access to the SQL instance which underpins their site, while customer environment “B” has to rely on DBA generosity to get anything, unless it’s available from the SSRS reports, or SCCM Queries.

For customer A (the first one), my tried-and-true cheezy-wheezy SQL-via-PowerShell pipeline happy meal kit works great. This is basically where I drop a library of deep-fried .sql files with a small fry, sugary drink and kids toy all in a nice box, then shoot those through a dbatools PowerShell paintball gun to get a pipeline-capable data pump, which they can do things with (see stupid diagram number 1 below).

Stupid Diagram 1

For customer B (the latter), this happy meal doesn’t work because it’s not gluten-free and doesn’t have the vegan solar-powered kids toy in the box. So that requires a somewhat less robust approach: WQL. SQL (T-SQL actually) is much much faster to process (in general) than squeezing a data cow through a WMI provider wormhole, but you work with what you have.

Stupid Diagram 2

So, to pull the same logical dataset “join” with WQL that the SQL option allows, requires some really arcane backflips with two wooden legs and a pogo stick. Why? I don’t know, but it seems like they didn’t have lunch with the SQL team and instead got high on gasoline fumes or something.

Maybe I’m being short-sighted, but to me, WQL is the JRE of SQL. Actually, that’s not fair, because WQL is at least useful. I know you can make WQL do joins by smoking MOF files in a kiln, but I’m too lazy for that. But I digress.

Anyhow, why do I prefer SQL over WQL for PowerShell automation? Mainly because SQL makes joins easy, and on average it performs better (CPU, memory, etc.) than the WMI provider. WQL, on the other hand, forces more of the data shaping onto PowerShell after the source data is returned, and PowerShell isn't as fast at dataset operations as a relational database. The more shaping you can do within SQL, the better.

PowerShell SQL Query – Example 1

# Forrest Gump example...

$sys = Invoke-DbaQuery -SqlInstance "CM01" -Database "CM_P01" -Query "select distinct name0 from v_r_system"

# Forrest Gump the ping-pong player example, or using a native join...

$clients = Invoke-DbaQuery -SqlInstance "CM01" -Database "CM_P01" -Query "select distinct sys.name0, cs.model0 from v_r_system sys inner join v_gs_computer_system cs on sys.ResourceID = cs.ResourceID"
# results example...
name0 model0
----- ------
CM01 Virtual Machine
DC01 Virtual Machine
FS01 Virtual Machine
WEB01 Virtual Machine
WS01 Virtual Machine
WS02 Virtual Machine

The downside to this is that the PowerShell code can easily (and quickly) get bloated, and you end up with two languages in one file, which can become a pain to test and troubleshoot over time.

Better yet, separate the two so that the PowerShell script and SQL query statements are in separate files. The SQL queries can be easily opened and tested in SQL Server Management Studio, making things much less painful.

PowerShell SQL Query – Example 2

First, the super-bloated SQL query file example…

SELECT DISTINCT 
dbo.v_R_System.Name0 AS ComputerName,
dbo.v_R_System.AD_Site_Name0 AS ADSiteName,
dbo.v_R_System.Client_Version0 AS ClientVersion,
dbo.v_GS_LOGICAL_DISK.DeviceID0 AS DiskID,
(dbo.v_GS_LOGICAL_DISK.Size0 / 1024.0) AS DiskSize,
(dbo.v_GS_LOGICAL_DISK.FreeSpace0 / 1024.0) AS DiskFree,
ROUND(1.0 - (dbo.v_GS_LOGICAL_DISK.FreeSpace0 * 1.0 / dbo.v_GS_LOGICAL_DISK.Size0), 2) AS DiskUsed,
dbo.v_GS_NETWORK_ADAPTER_CONFIGURATION.IPEnabled0 AS IPEnabled,
dbo.v_GS_NETWORK_ADAPTER_CONFIGURATION.IPAddress0 AS IPAddress,
dbo.v_GS_NETWORK_ADAPTER_CONFIGURATION.MACAddress0 AS MACAddress,
dbo.v_GS_NETWORK_ADAPTER_CONFIGURATION.DefaultIPGateway0 AS IPGateway,
dbo.v_GS_PROCESSOR.Name0 AS Processor,
dbo.v_GS_X86_PC_MEMORY.TotalPhysicalMemory0 as Memory,
dbo.vWorkstationStatus.OperatingSystem,
dbo.v_GS_OPERATING_SYSTEM.BuildNumber0 as OSBuild,
dbo.v_GS_SYSTEM_ENCLOSURE.SerialNumber0 as SerialNum,
dbo.vWorkstationStatus.LastHardwareScan,
dbo.vWorkstationStatus.LastPolicyRequest,
dbo.vWorkstationStatus.LastMPServerName
FROM
dbo.v_R_System INNER JOIN
dbo.vWorkstationStatus ON
dbo.v_R_System.ResourceID = dbo.vWorkstationStatus.ResourceID LEFT OUTER JOIN
dbo.v_GS_PROCESSOR ON
dbo.v_R_System.ResourceID = dbo.v_GS_PROCESSOR.ResourceID LEFT OUTER JOIN
dbo.v_GS_NETWORK_ADAPTER_CONFIGURATION ON
dbo.v_R_System.ResourceID = dbo.v_GS_NETWORK_ADAPTER_CONFIGURATION.ResourceID LEFT OUTER JOIN
dbo.v_GS_LOGICAL_DISK ON
dbo.v_R_System.ResourceID = dbo.v_GS_LOGICAL_DISK.ResourceID LEFT OUTER JOIN
dbo.v_GS_OPERATING_SYSTEM ON
dbo.v_R_System.ResourceID = dbo.v_GS_OPERATING_SYSTEM.ResourceID LEFT OUTER JOIN
dbo.v_GS_X86_PC_MEMORY ON
dbo.v_R_System.ResourceID = dbo.v_GS_X86_PC_MEMORY.ResourceID LEFT OUTER JOIN
dbo.v_GS_SYSTEM_ENCLOSURE ON
dbo.v_R_System.ResourceID = dbo.v_GS_SYSTEM_ENCLOSURE.ResourceID
WHERE
(dbo.v_GS_LOGICAL_DISK.DeviceID0 = N'C:')
AND
(dbo.v_GS_NETWORK_ADAPTER_CONFIGURATION.IPEnabled0 = 1)
ORDER BY ComputerName

I told you it was bloated. Almost dead-fish-in-a-hot-summer-dumpster bloated. So, I save this carcass into a file named "cm_clients.sql". This example was actually created in SSMS and saved, but you could use Visual Studio or whatever you prefer for SQL development and testing.

Now let’s hook up the electrodes to the PowerShell beast and get moving…

$rows = Invoke-DbaQuery -SqlInstance "CM01" -Database "CM_P01" -File "c:\queries\cm_clients.sql"

That’s it. One line of PowerShell, while the SQL logic is hiding in a separate file. The output comes back as a collection of rows with the columns defined in the query.

Sidenote: I blogged about this already, but you can populate a folder with a library of .sql files, then use Get-ChildItem and pipe it to Out-GridView to make a handy and cheap GUI menu selector to impress your non-scripting coworkers. Something like this…

$QPath = "\\servername\sharename\folder"
$qfiles = Get-ChildItem -Path $QPath -Filter "*.sql" | Sort-Object Name
if ($qfiles.Count -lt 1) {
    Write-Warning "$QPath contains no .sql files"
    break
}
$qfile = $qfiles | Select -ExpandProperty Name | Out-GridView -Title "Select Query to Run" -OutputMode Single
if ($qfile) {
    $filepath = Join-Path $QPath $qfile
    Invoke-DbaQuery -SqlInstance "CM01" -Database "CM_P01" -File $filepath # ...you get the idea
}

PowerShell WMI Query – Example 3

function Get-CmObjectCollection {
    [CmdletBinding()]
    param (
        [parameter(Mandatory=$True, HelpMessage="SMS Provider Name")]
        [ValidateNotNullOrEmpty()]
        [string] $Computer,
        [parameter(Mandatory=$True, HelpMessage="Site Code")]
        [ValidateLength(3,3)]
        [string] $SiteCode,
        [parameter(Mandatory=$True, HelpMessage="WMI Class Name")]
        [ValidateNotNullOrEmpty()]
        [string] $ClassName,
        [parameter(Mandatory=$False, HelpMessage="Credentials")]
        [System.Management.Automation.PSCredential] $Credential
    )
    $Namespace = "ROOT\SMS\site_$SiteCode"
    try {
        # -ErrorAction Stop so failures actually land in the catch block
        if ($Credential) {
            $result = @(Get-WmiObject -Class $ClassName -ComputerName $Computer -Namespace $Namespace -Credential $Credential -ErrorAction Stop)
        }
        else {
            $result = @(Get-WmiObject -Class $ClassName -ComputerName $Computer -Namespace $Namespace -ErrorAction Stop)
        }
        # strip the WMI plumbing properties so only the class data remains
        $result | Select-Object * -ExcludeProperty PSComputerName, Scope, Path, Options, ClassPath,
            Properties, SystemProperties, Qualifiers, Site, Container, __GENUS, __CLASS, __SUPERCLASS,
            __DYNASTY, __RELPATH, __PROPERTY_COUNT, __DERIVATION, __SERVER, __NAMESPACE, __PATH
    }
    catch {
        Write-Error $_.Exception.Message
    }
}

Without trying to shoehorn the entire set of SQL joins above into a WQL variation, which would look horrific without a ton of refactoring on Adderall and an ice bath, you can see with just two (2) datasets how this doesn’t scale very well…

$sys = Get-CmObjectCollection -Computer "CM01" -SiteCode "P01" -ClassName "SMS_R_System"
$cs = Get-CmObjectCollection -Computer "CM01" -SiteCode "P01" -ClassName "SMS_G_System_Computer_System"
$sys | % {
    $resID = $_.ResourceID
    $name  = $_.Name
    $model = $cs | ? {$_.ResourceID -eq $resID} | Select -ExpandProperty Model
    $props = [ordered]@{
        Name  = $name
        Model = $model
    }
    New-Object PSObject -Property $props
}

I suppose with enough brain juice, you could separate the query and processing logic with the WQL approach as well, but I’ll let you do that if you want.

Regardless of which approach you choose (or have chosen for you), the main thing to focus on is the data. With PowerShell you get objects back, which means you can filter, sort, and reshape the data almost any way you need.
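For instance, once the rows come back as objects, the standard pipeline verbs do the shaping no matter where the data originated. The rows below are made up for illustration, but the same pattern applies to anything Invoke-DbaQuery or the WMI provider hands you:

```powershell
# illustrative rows, as if returned from a query (fake data)
$rows = @(
    [pscustomobject]@{ Name = 'WS01'; Model = 'Virtual Machine'; DiskFreeGB = 12.5 }
    [pscustomobject]@{ Name = 'WS02'; Model = 'Latitude 5400';   DiskFreeGB = 80.0 }
    [pscustomobject]@{ Name = 'WS03'; Model = 'Virtual Machine'; DiskFreeGB = 4.2 }
)

# filter and sort without touching the data source again
$lowDisk = $rows | Where-Object { $_.DiskFreeGB -lt 20 } | Sort-Object DiskFreeGB

# or group to get a quick model breakdown
$byModel = $rows | Group-Object Model | Sort-Object Count -Descending
```

Run those once against real query results and you'll see why pulling objects beats parsing text every time.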

Conclusion

If you have the option, always use SQL. If you don’t, WQL can work, it’s just not as fun. Kind of like comparing Motley Crue with Lawrence Welk.

Brain is fried. Kirk out. Bed time.