databases, Scripting, System Center, Technology

What Not to Do With ConfigMgr, 1.0.1

[note: this post has been sitting in my drafts folder for over a year, but recent events reminded me to dust it off and post it]

One of my colleagues, the infamous @chadstech, sent our team a link to the slide deck from the Channel9 session (MS04), “30 Things you should never do with System Center Configuration Manager”, by @orinthomas and @maccaoz. If you haven’t seen (or read) it already, I strongly recommend doing so first.

It’s from 2016, so it’s a few years old now, but it still holds up very well in mid-2019. However, everyone who’s ever worked with the product knows that the list could become a Netflix series.

This blog post is not going to repeat the above; instead, it adds to the list some things I still see in a variety of environments today. Things which really should be nipped in the bud, so to speak. Baby steps.

Using a Site Server like a Desktop

Don’t do it. Install the console on your crappy little desktop or laptop and use that. Leave your poor server alone. Avoid logging into servers (in general) unless you REALLY need to perform local tasks. Anything you CAN do remotely should be done remotely.

If installing/maintaining the ConfigMgr console is your concern: forget that. The days of having to build and deploy console packages are gone. Install it once, and let it update itself when new versions are available. Kind of like Notepad++. Nice and easy.

Why? Because…

  • Using a server as a daily desktop workspace drags on resources and performance.
  • It creates a greater security and stability risk to the environment.
  • The more casual you are with your servers, the sloppier you’ll get, and eventually you’ll do something you’ll regret.

Whatever your excuse has been thus far, stop it.

Anti-Virus Over-Protection

Even in 2019, with so many tools floating about like Symantec, McAfee, Sophos, CrowdStrike, and so on, when I ask if the “exclusions” are configured to support Configuration Manager, I often get a confused look or an embarrassed chuckle. Gah!!! Chalkboard scratch!

There are several lists of things to exclude from “real-time” or “on-demand” scanning, like this one, and this one. Pick one. Failing to do this VERY often leads to breakage in processes like application deployments, software update deployments, and policy updates.

Also important: with each new release of Configuration Manager, read the release notes and look for new folders, log files, services or processes that may be introduced. Be sure to adjust your exclusions to suit.
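
If Windows Defender is in the mix, you can even script the exclusions. Here’s a rough sketch (the paths and process names are examples only; take the authoritative entries from whichever exclusion list you picked above, and adjust for your install drive and site roles):

# example only: add a few commonly-cited ConfigMgr exclusions to Windows Defender
# verify every path and process against your chosen exclusion reference first
Add-MpPreference -ExclusionPath "C:\Program Files\Microsoft Configuration Manager\Inboxes"
Add-MpPreference -ExclusionPath "C:\Windows\CCM\Logs"
Add-MpPreference -ExclusionProcess "CcmExec.exe"
Add-MpPreference -ExclusionProcess "smsexec.exe"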

Ignoring Group Policy Conflicts

Whatever you’re doing with regards to GPO settings, make damned sure you’re not also doing the same things with Configuration Manager. The two “can” be combined (in rare cases) to address a configuration control requirement, and you can sew two heads on a cow, but that doesn’t mean it’s the best approach.

Pick one, or the other, only. If you have WSUS settings deployed by GPO, and are getting ready to roll out Software Updates Management via Configuration Manager, stop and carefully review what the GPO’s are doing and make adjustments to remove any possible conflicts.

And, for the sake of caffeine: DOCUMENT your settings wherever they live. GPO’s, CI’s or CB’s in ConfigMgr, scheduled tasks, whatever. DOCUMENT THEM! Use the “Comments” or “Description” fields to your advantage. They can be mined and analyzed easily (take a look at PowerShell module GPODOC for example / shameless plug).
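
For a quick taste of that mining, the built-in GroupPolicy module will do; a minimal sketch that flags GPOs with an empty Description field:

# minimal sketch: list GPOs that have no Description (comment) filled in
Import-Module GroupPolicy
Get-GPO -All |
    Where-Object { [string]::IsNullOrEmpty($_.Description) } |
    Select-Object DisplayName, ModificationTime |
    Sort-Object DisplayName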

One-Size-Fits-All Deployments

I’ve seen places that only use packages, or only use Task Sequences, or only use script wrapping, or only repackage with AdminStudio (or some alternative). That’s like doing every repair job in your house or apartment with a crowbar.

There’s nothing wrong with ANY means of deploying software as long as it’s the most efficient and reliable option for the situation. Just don’t knee-jerk into using one hammer for every nail, screw, and bolt you come across.

Pick the right tool or method for each situation/application. Doing everything “only” one way is ridiculously inefficient and time-wasting.

Sharing SQL Instances

The SQL licensing that comes with a System Center license does not permit hosting third-party products. Not even your own in-house projects, technically speaking. You “can” do it, but you’re not supposed to.

What that means is, when you run into a problem with the SQL Server side of things, and you call Microsoft, and they look at it and see you have added a bunch of unsupported things to it, you’ll likely get the polite scripted response, “Thank you for being a customer. You appear to be running in an unsupported configuration. Unfortunately, we can’t provide assistance unless you are running in a supported configuration. Please address this first and re-open your case, if needed, for us to help. Thank you. Have a nice day. Bye bye now.”

And, now, you’re facing an extended duration of what could have been a simple problem (or no problem at all, since your third-party app might be the problem).

Configuration Manager is extremely demanding of its SQL resources. Careful tuning and maintenance is VERY VERY VERY often the difference between a smooth-running site and an absolute piece of shit site. I can’t stress that enough.

Leeching SQL Resources

Some 3rd party products, which I’m advised not to name for various legal reasons, provide “connection” services into the Configuration Manager database (or SMS Provider). Attaching things to any system incurs a performance cost.

Before you consider installing a “trial” copy of one of those in your production environment, do it in a test environment first. Benchmark your environment before installing it, then again after. Pay particularly close attention to what controls that product provides over connection tuning (polling frequency, types of batch operations, etc.).
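
Even a crude baseline is better than none. Here’s a sketch using Get-Counter (the counters and sample window are just examples; pick metrics that matter in your environment) to capture a before/after snapshot:

# crude benchmark sketch: capture CPU/memory/disk counters to a .blg for comparison
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes',
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length'
)
# 60 samples at 5-second intervals = 5 minutes of data
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 60 |
    Export-Counter -Path "C:\Temp\baseline-$(Get-Date -Format yyyyMMdd-HHmm).blg"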

And, for God’s sake (if you’re an atheist, just replace that with whatever cheeseburger or vegan deity you prefer), if you did install some connected product, do some diagnostic checking to see what it’s really doing under the hood.

And just as important: if you let go of the trial (or didn’t renew a purchased license) – UNINSTALL that product and make sure its sticky little tentacles are also removed.

Ignoring Backups

Make sure backups are configured and working properly. If you haven’t done a site restore/recovery before, or it’s been a while, try it out in an isolated test environment. Make sure you understand how it works, and how it behaves (duration, results, options, etc.).
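
And here’s a dirt-simple staleness check you could schedule (the backup path is a made-up example; point it at your real backup destination):

# sanity-check sketch: warn if the newest site backup file is older than 2 days
$backupPath = '\\backupserver\CMBackup'   # example path only
$newest = Get-ChildItem -Path $backupPath -Recurse -File |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
if (!$newest -or $newest.LastWriteTime -lt (Get-Date).AddDays(-2)) {
    Write-Warning "Site backup looks stale or missing. Go read smsbkup.log!"
}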

Ignoring the Logs

Every single time I get a question from a customer or colleague about some “problem” or “issue” with anything ConfigMgr (or Windows/Office) related, I usually ask “what do the logs show?” I’d say, on average, that around 80% of the time, I get silence or “hold on, I’ll check”.

If you ask me for help with any Microsoft product or technology, the first thing I will do is ask questions. The second thing I will do is look at the appropriate logs (or the Windows Event Logs).

So, when the log says “unable to connect to <insert URL here>” and I read that, and try to connect to same URL and can’t, I will say “Looks like the site isn’t responding. Here’s my invoice for $40,000 and an Amazon gift card”. And then you say “but I could’ve done that for free?!” I will just smile, and hold out my greedy little hand.

Keep in mind that the server and client logs may change with new releases. New features often add new log files to look at.

Check the logs first.
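
For a lazy first pass, you don’t even need CMTrace; a quick sketch that sweeps the client logs for obvious trouble (tune the pattern to taste):

# quick-and-dirty sweep of the ConfigMgr client logs for failure messages
Get-ChildItem -Path "$env:WinDir\CCM\Logs" -Filter *.log |
    Select-String -Pattern 'failed|error' |
    Select-Object Filename, LineNumber, Line |
    Out-GridView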

Ignoring AD: Cleanups

Managers: “How accurate is Configuration Manager?”

Answer: “How clean is your environment?”

Managers: (confused look)

If you don’t have a process in place to ensure your environment is maintained to remove invalid objects and data, any system that depends on that data will also be inaccurate. It’s just a basic law of nature.

Step 1 – Clean up Active Directory. Remove stale accounts for users and computers that are long gone. Move unconfirmed accounts to a designated OU until verified or removed. This process is EASY to automate, by the way.
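
For example, with the ActiveDirectory module (the 90-day window and OU path below are placeholders; use thresholds that fit your environment):

# rough sketch: park computer accounts inactive for 90+ days in a holding OU
Import-Module ActiveDirectory
Search-ADAccount -AccountInactive -ComputersOnly -TimeSpan 90.00:00:00 |
    Move-ADObject -TargetPath "OU=Pending-Review,DC=contoso,DC=com" -WhatIf
# drop the -WhatIf once you trust what it would do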

Step 2 – Adjust ConfigMgr discovery method settings to suit your environment. Don’t poll for changes every hour if things really only change monthly. And don’t poll once a month if things really change weekly. You get the idea. Just don’t be stupid. Drink more coffee and think it through.

Step 3 – I don’t have a step 3, but the fact that you actually read to this point brings a tear to my eyes. Thank you!

Ignoring AD: Structural Changes

But wait – there’s more! Don’t forget to pay attention to these sneaky little turds:

  • Additions and changes to subnets, but forgetting to update Sites and Services
  • Changes to domain controllers, but not updating DNS, Sites and Services or DHCP
  • Changes to OUs, but forgetting to update GPO links
  • All the above + forgetting to adjust ConfigMgr discovery methods to suit.

Ignoring DNS and DHCP

“It’s never DNS!” is really not that funny, because it’s very often DNS. Or the refusal to admit there might be a problem with DNS. For whatever reason, many admins treat DNS like it’s their child. If you suggest there might be something wrong with it, it’s like a teacher suggesting their child might be a brat, or stupid, or worse: a politician. The other source of weirdness is DHCP and its interaction with DNS.

Take some time to review your environment and see if you should make adjustments to DHCP lease durations, DNS scavenging, and so on. Sometimes a little tweak here and there (with CAREFUL planning) can clean things up and remove a lot of client issues as well.

Check DHCP lease settings and DNS scavenging to make sure they are closely aligned to how often clients move around the environment (physically). This is especially relevant with multi-building campus environments with wi-fi and roaming devices.
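
The built-in DhcpServer and DnsServer modules make that review painless; a sketch to run on (or against) the appropriate servers:

# review sketch: compare DHCP lease durations against DNS aging/scavenging settings
Get-DhcpServerv4Scope | Select-Object ScopeId, Name, LeaseDuration
Get-DnsServerScavenging    # shows NoRefreshInterval, RefreshInterval, ScavengingInterval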

Task Sequence Repetition

A few releases back, Microsoft added child Task Sequence features to ConfigMgr. If you’re unaware of this, read on.

Basically, you can insert steps which call other Task Sequences. In Orchestrator or Azure Automation parlance this is very much like Runbooks calling other Runbooks. Why is this important? Because it allows you to refactor your task sequences to make things simpler and easier to manage.

How so?

Let’s say you have a dozen Task Sequences, and many (or all) of them contain identical steps, like bundles of applications, configuration tasks, or driver installations. And each time something needs updating, like a new application version, or a new device driver, you have to edit each Task Sequence where you “recall” it being used. Eventually, you’ll miss one.

That’s how 737 Max planes fall out of the sky.

At the very least, it’s time wasted which could be better spent on other things, like drinking, gambling and shooting guns at things.

Create a new Task Sequence for each redundant step (or group of steps) used in other Task Sequences. Then replace those chunks of goo with a link to the new “child” Task Sequence. Now you can easily update things in one place and be done with it. Easy. Efficient.

Ignoring Staffing

Last, but certainly not least, is staffing. Typically, this refers to not having enough of it. In a few cases, it’s too many. If your organization expects you to cover Configuration Manager and its SQL Server aspects, along with clients, deployments, imaging, updates, and configuration policies, AND maintain other systems or processes, it’s time for some discussion, or a new job.

If you are an IT manager, and you allow your organization to end up with one person as the single point of failure for a critical business operation, that’s foolish. You are one drunk driver away from a massive problem.

An over-burdened employee won’t have time to create or maintain accurate documentation, so forget the crazy idea of finding a quick replacement with zero downtime.

In team situations, it’s important to encourage everyone to do their own learning, rather than depend on the lead “guru” all the time. This is another single point of failure situation you can avoid.

If there’s anyone who knows every single feature, process and quirk within Configuration Manager, I haven’t met them yet. I’ve been on calls with PFE’s and senior support folks and heard them say “Oh, I didn’t know that” at times. It doesn’t make sense to expect all of your knowledge to flow out of one person. Twitter, blogs, user groups, books, video tutorials, and more can help you gain a huge amount of awareness of features and best practices.

That’s all for now. Happy configuring! 🙂

System Center, Technology

7 SCCM Task Sequence Tips

I purposely left out “OSD” in the title, because I see a significant increase in non-OSD tasks being performed with Task Sequences. This includes application deployments, complex configuration sequences, and so on. Whether those could be done more efficiently/effectively using other tools is a topic for another beer-infused, knife-slinging, baseball bat-swinging discussion. Just let me know early-on, so I can sneak out the back door.

Anyhow, this is just a short list of “tips” I find to be useful when it comes to planning, designing, building, testing, deploying and maintaining Task Sequences in a production environment. Why 7? Because it’s supposed to be lucky.

Disclaimer

Are you sitting down? Good. This might be a big shock to you, but I am *not* the world’s foremost expert on Task Sequences, or Configuration Manager. And some (maybe all) of these “tips” may be eye-rolling old news to you. But hopefully, some of this will be helpful to you.

Start Simple!

So often, I see someone jump in and start piling everything into a new Task Sequence at once, and THEN trying it out. This can make the troubleshooting process much more painful and time-consuming than it needs to be. Start with what developers call a “scaffold”, and gradually build on that.

I usually start with the primary task at hand, such as “install Windows 10 bare metal”: test that with only the absolute bare minimum steps required to get a successful deployment. Then add the next-most-important steps in layers and continue on.

However you decide to start, just be sure to test each change before adding the next. It might feel tedious and time-wasting, but it can save you 10 times the hassle later on.

Divide and Conquer

Don’t forget that the latest few builds of ConfigMgr (and MDT btw) support “child”, or nested, Task Sequences. In situations where you have multiple Task Sequences which share common steps, or groups of steps, consider pulling those out to a dedicated Task Sequence and link it where needed. Much MUCH easier to maintain when changes are needed.

Some common examples where this has been effective (there are many more I assure you) include Application Installations, Drivers, Conditional blocks of steps (group has a condition, which controls sub-level steps within it, etc.), and setup steps (detection steps with task sequence variable assignments at the very top of the sequence, etc.)

I’m also surprised how many people are not aware that you can open two Task Sequence editors at the same time, side-by-side, and copy/paste between them. No need to re-create things, when you can simply copy them.

Organize and Label

If you are going to have multiple phases for build/test/deploy for your Task Sequences, it may help to do one (or both) of the following:

  • Use console folders to organize them by phase (e.g. Dev, Test, Prod, and so on)
  • Use a consistent naming convention which clearly identifies the state of the Task Sequence (e.g. “… – Prod – 1.2”)

This is especially helpful with team environments where communications aren’t always optimal (multiple locations, language barriers, time zones, etc.)

Establish a policy and communicate it to everyone, then let the process manage itself. For example: “All you drunken idiots, listen up! From now on, only use Task Sequences with ‘Prod’ in the name, unless you know it’s for internal testing only! Any exceptions to this require you eating a can of bug spray.”
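
You can even let the convention police itself. A sketch using the ConfigurationManager module (assumes the “Prod” naming rule above and a console-connected session on the site drive):

# sketch: flag Task Sequences that don't carry the 'Prod' naming tag
Get-CMTaskSequence |
    Where-Object { $_.Name -notmatch 'Prod' } |
    Select-Object Name, LastRefreshTime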

Documentation

Wherever you can use a comment, description, or note field in anything, you should. This applies to more than ConfigMgr as well. Group Policy Objects and GP settings are rife with missing explanations of why a setting exists or who created it. Don’t let this minefield creep into your ConfigMgr environment too.

Shameless plug: For help with identifying GPOs and settings (including preferences) which do or don’t have comments, take a look at the GpoDoc PowerShell module, available in the PowerShell Gallery, and wherever crackheads can be found.

The examples below show some common places that seem to be left blank in many (most) organizations I run across.

Other places where documentation (comments) can be helpful are the “Scripts” items, especially the Approval comment box.

Side note: You can query the SQL database view vSMS_Scripts, and check the “Comment” column values to determine what approval comments have been added to each item (or not). Then use the “Approver” column values to identify who to terminate.
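
Something like this sketch, via Invoke-Sqlcmd (the server and database names are examples, and the ScriptName column is an assumption; the post only vouches for Comment and Approver, so check your site database):

# sketch: find Scripts items approved without a comment
$query = "SELECT ScriptName, Approver, Comment FROM dbo.vSMS_Scripts"
Invoke-Sqlcmd -ServerInstance "CM-SQL01" -Database "CM_P01" -Query $query |
    Where-Object { [string]::IsNullOrEmpty($_.Comment) } |
    Select-Object ScriptName, Approver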

Access Control

This is aimed at larger ConfigMgr teams. I’ve seen environments with a dozen “admins” working in the console, all with Full Administrator rights. If you can’t rein that wild-west show in a bit, at least sit down and agree who will maintain Task Sequences. Everyone else should stay out of them!

This is especially important if the team is not co-located. One customer I know was going through a merger (M&A) and, apparently, one group in another country didn’t like some of the steps in their Windows 10 task sequence, so they deleted the steps. No notifications were sent. It was discovered when the first group started hearing about things missing from newly-imaged devices.

In that case, the things needed were (A) better communications between the two groups, and (B) proper security controls. After a few meetings it was agreed that the steps in question would get some condition tests to control where and when they were enabled.

Make Backups!!!!

Holy cow, do I see a lot of environments where the Backup site maintenance task isn’t enabled. That’s like walking into a biker bar wearing a “Bikers are all sissies!” t-shirt. You’re just asking for trouble.

Besides a (highly recommended) site backup, however, it often pays dividends to make what I call “tactical backups”. This includes such SUPER-BASIC things as:

  • Make a copy of your production task sequences (in the console) – This is often crucial for reverting a bunch of changes that somehow jacks-up your task sequence, where you could otherwise spend hours/days figuring out which change caused it. Having a copy makes it really easy (and fast) to recover and avoid lengthy impact to production.
  • Export your production task sequences – Whether this is part of a change management process (vaulting, etc.) or just as a CYA step, it can also make it easy to recover a broken Task Sequence quickly.

Either of these are usually much less painful than pulling from a site backup.

As a double-added precaution, I highly/strongly recommend that anytime you intend to make a change to a production task sequence, that you make a copy of it first. Then if your edits don’t work, instead of spending hours troubleshooting why a revert attempt isn’t actually reverting, you can *really* revert back to a working version.
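
If you want to script the export half of that habit, the ConfigurationManager module has Export-CMTaskSequence; a sketch (the destination path is an example, and parameter names can shift between console versions, so verify against Get-Help first):

# CYA sketch: export every task sequence to a dated file
$stamp = Get-Date -Format 'yyyyMMdd'
Get-CMTaskSequence | ForEach-Object {
    Export-CMTaskSequence -InputObject $_ -ExportFilePath "E:\TSBackup\$($_.Name)-$stamp.zip"
}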

Don’t Overdo It

One final piece of advice is this: Just because you get comfortable using a particular hammer, don’t let this fool you into thinking everything is a nail. Task Sequences are great, and often incredibly useful, but they’re not always the optimal solution to every challenge. Sometimes it’s best to stick with a very basic approach, like a Package, Application, or even a Script.

I’ve worked with customers who prefer to do *everything* via a Task Sequence. Even when it was obvious that it wasn’t necessary. The reason given was that it was what they were most familiar with at the time. They have since relaxed that default a bit, and saved themselves quite a bit of time. That said, Task Sequences are nice and should always be on your short list of options to solve a deployment need.

Summary

I hope this was helpful. If not, you can also print this out, and use it as a toilet bombing target. Just be sure to load up on a good Mexican lunch before you do. Cheers!

Scripting, System Center, Technology

Captain’s Log: cmhealthcheck

I’ve consumed way way waaaaay too much coffee and tea today. Great for getting things done, not great for my future health.

CMHealthCheck 1.0.8 is in the midst of being waterboarded, kicked, beaten, tasered and pepper-sprayed to make it squeal. I’m close to a final release. Among the changes in testing:

  • Discovery Methods
  • Boundary Groups
  • Site Boundaries
  • Packages, Applications, Task Sequences (just summary), Boot Images (summary), etc.
  • User and Device Collections
  • SQL Memory allocation (max/pct)
  • Fixed “Local Groups” bug
  • Fixed “Local Users” bug
  • Enhanced Logical Disks report
  • Fixed “Installed Software” sorting issue
  • Fixed “Services” sorting issue
  • Fixed null-reference issues with “Installed Hotfixes”

Still in the works:

  • Sorting issue with ConfigMgr Roles installation table
  • Local Group Members listing
  • More details for Discovery Methods
  • Client Settings
  • ADR’s
  • Deployment Summary
  • Enhancements to the HTML reporting features

Stay tuned for more.

Note: The current posted version (as of 3/8/19) is 1.0.7, which is what will install if you use Install-Module.

To load the 1.0.8 test branch, go to the GitHub repo, change the branch drop-down from “master” to 1.0.8 (or whatever the other name happens to be at the time) and then use the Download option to get the .ZIP file. Then extract to a folder, and use Import-Module to import the .psd1 file and start playing.
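
If you’d rather script that, something like this should work (the repo URL and branch name are assumptions; verify them against the actual GitHub repo first):

# sketch: grab the 1.0.8 branch as a ZIP and import the module from the extracted folder
# URL and branch are assumptions -- check the real repo before running
$zip = "$env:TEMP\CMHealthCheck-1.0.8.zip"
Invoke-WebRequest -Uri "https://github.com/Skatterbrainz/CMHealthCheck/archive/1.0.8.zip" -OutFile $zip
Expand-Archive -Path $zip -DestinationPath "$env:TEMP\CMHealthCheck-1.0.8" -Force
# GitHub ZIPs nest a top-level folder named after the branch
Import-Module "$env:TEMP\CMHealthCheck-1.0.8\CMHealthCheck-1.0.8\CMHealthCheck.psd1"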

business, Personal, Society, Technology

Working from Home: The Good. The Bad. The Weird.

This post is not intended to be funny, so if you’re looking for a better joke, watch C-SPAN.  Wait, that was technically a joke.  Oh well.

So, a little (snooze-fest) background to get you in the mood.  Some wine, some Barry White music, dim lights, and…

I’ve now been working remotely for an employer since Spring of 2015.  I didn’t seek this out, because I always thought of it as a unicorn job.  But I soon discovered how many others have been doing it (in IT) for a long time, and how the practice is increasing in popularity.  This is particularly true for consulting, more than full-time employment (FTE) or contracting.  Consulting for an employer on full W-2, etc. is more common than independent, but I see quite a few of those as well (mostly online, that is).

Since then, I’ve had many discussions with others who work from home and it seemed odd how little information exists on “best practices” and warning signals.  There are some books, some blogs, some whitepapers out there, but most appear to be focused on a specific area within the topic, rather than taking a holistic view.  I try to avoid using “holistic” because I’ve heard it used to death, but I couldn’t stop myself.

The Good

The advantages for the consultant are pretty obvious, but not always what you expect. I won’t bother with the advantages for employers, because they should already know that, or they’re on the wrong path.

Flexibility is high on the list. When you wake up, when you take lunch, etc. Dress code is optional (more on this later), and feeling more connected to your “home” are often benefits. For example, spending more time with your kids, pets, spouse (in that order, ha ha), etc.

Flexibility also allows you to step away from some conference calls to use the restroom, get coffee, snacks, fetch the mail, etc. as long as you have a wireless headset and a mute button. I can take care of dishes, laundry, feeding the pets, watering plants, all while discussing why Configuration Manager isn’t going to automatically wipe and re-image every machine in the company from the evil PXE-beast.

Another advantage is it’s easier to multi-task (which can also be a disadvantage). For example, while one conference call is droning on about something you’re not involved with, you can work on other tasks, chat with customers, engineers, etc. As for conference calls, it’s often easier to “back-channel” on separate chat sessions, with others on the same call, than when everyone is physically in a room together.

Yet another advantage with online conferencing is the ease of sharing links, documents, etc. via the chat application (Skype, WebEx, etc.) while the meeting is still going.

The Bad

Some of these will vary by your personality, home environment, and other personal factors.

Solitude. It’s not for everyone. If you don’t like working in isolation (as many programmers prefer), and would rather have people around you, then working from home may not be ideal. If you suffer from mild to severe depression, even seasonal, it can be tough, but not impossible, to accommodate.

Background distractions/disruptions. Noisy pets. Leaf blowers outside. Fans. All of these can make you a master of the mute button, but if that gets too frequent, customers (and your boss) may become concerned about your ability to focus.

What to do?

None of the following recommendations imply 24/7 focus. It’s okay if you slip off the wagon once in a while. The important thing is to keep them in front of you and try to do them whenever possible. Make a checklist if you want, whatever works. This is aside from the technical side of things, like getting a wireless Bluetooth headset.

  1. Get outside!!! Walk, run, or even sit in a chair. But get outside at least twice a day. Even if the weather sucks. It’s important to mentally feel connected to the outside world. Sunshine, even indirect/cloudy, stimulates chemical balances in your mind and body (proven, go look it up, if you don’t believe me).
  2. Get away from your desk/chair at least once an hour. Walk to another room (or outside, even better). Just move.
  3. Watch your diet. Avoid sugary snacks or putting too much sweetener in your drinks (or drinking canned/bottled sweetened drinks). Being sedentary and consuming unhealthy foods/drinks is one of the fastest ways to lose control of your health. If you think it’s easy to slip on this when working in an office, it’s twice as easy when working from home. Also, keep snacks AWAY from reach. Put all your food, coffee, drinks, etc. in another room, or across the room you’re in. Make yourself have to get up to get them.
  4. Exercise. If you’re so inclined, do some resistance workout activities, or calisthenics to keep the blood flowing and improve your health. Sitting at home is worse than sitting in an office because you don’t even walk from the office entrance to your desk and back. You can easily lose weight doing this, which is a win-win. Nothing fancy. Even arm circles, squats, and so on are better than clicking a mouse all day.
  5. Set Boundaries. Pick a time to “knock off” work and leave it behind. It’s really really reeeeeeeaaaally easy to keep working long after you should quit for the day. It’s bad for your mind, health, and can affect your sleep pattern, mood, appetite, and family time (or pet time).
  6. Have lunch with a friend, colleague, family member, etc., at least once a week if you can. Nothing fancy or expensive, just coffee or a light lunch will do. Conversation is one of the best vaccines against feeling down from isolation. It also keeps your conversation chops sharp for when you go to meet customers.
  7. Go to local meet-ups. This is VERY important for three reasons: It gets you into groups and interacting with others, it gets you away from your home office, and you learn new things. Just watch out for junk food if they provide it.
  8. Change the Scenery. Work from a different location sometimes. A coffee shop, library, park, shopping mall, etc. Whatever fits your ability to focus on what you do for work. Some people prefer busy places, some prefer quiet places. But getting out of the house is important.
  9. Personal Stuff. Shower, shave, groom, like you’re going to the office. Every day.
  10. Dress up. Yes. One of the most common changes people incur when working from home is working in pajamas, sweats, even underwear. It’s easy and comfortable. It can also gradually affect how you feel and how you conduct yourself in conference calls. You don’t need to put on a suit, although that’s fine if you like. Just jeans and a button down shirt or polo, with socks and shoes. And a belt. Believe it or not, aside from getting away from my desk, this is the most challenging one for me.
  11. Avoid cages. If you listen to music or podcasts, news, or TV shows while working, change up your selection. Avoid patterns that subconsciously make your brain feel like you’re standing still.

Anyhow, I hope this is helpful.

humor, Personal, Scripting, Technology

$HoHoHo = ($HoList | Do-HoHos -Days 12) version 1812.18.01


UPDATE: 2018.12.18 (1812.18.01) = Thanks to Jim Bezdan (@jimbezdan) for adding the speech synthesizer coolness!  I also fixed the counter in the internal loop.  Now it sounds like HAL 9000 but without getting your pod locked out of the mother ship. 😀

I’m feeling festive today.  And stupid.  But they’re not mutually exclusive, and neither am I, and so can you!   Let’s have some fun…

Paste all of this sticky mess into a file and save it with a .ps1 extension.  Then put on your Bing Crosby MP3 list and run it.

Download from GitHub: https://raw.githubusercontent.com/Skatterbrainz/Utilities/master/Invoke-HoHoHo.ps1

The function…

function Write-ProperCounter {
    # convert an integer from 1 to 12 into its ordinal string: 1st, 2nd, 3rd, 4th ... 12th
    param (
      [parameter(Mandatory=$True)]
      [ValidateRange(1,12)]
      [int] $Number
    )
    if ($Number -gt 3) {
        # 4 through 12 all take the "th" suffix
        return $([string]$Number+'th')
    }
    else {
        switch ($Number) {
            1 { return '1st'; break; }
            2 { return '2nd'; break; }
            3 { return '3rd'; break; }
        }
    }
}

The bag-o-gifts…

$gifts = (
    'a partridge in a Pear tree',
    'Turtle doves, and',
    'French hens',
    'Colly birds',
    'gold rings',
    'geese a-laying',
    'swans a-swimming',
    'maids a-milking',
    'ladies dancing',
    'lords a-leaping',
    'pipers piping',
    'drummers drumming'
)
# the sleigh ride...
Add-Type -AssemblyName System.Speech
$Speak = New-Object System.Speech.Synthesis.SpeechSynthesizer

for ($i = 0; $i -lt $gifts.Count; $i++) {
    Write-Host "On the $(Write-ProperCounter $($i + 1)) day of Christmas, my true love gave to me:"
    $Speak.Speak("On the $(Write-ProperCounter $($i + 1)) day of Christmas, my true love gave to me,")
    $mygifts = [string[]]$gifts[0..$i]
    [array]::Reverse($mygifts)
    $x = $i + 1
    foreach ($gift in $mygifts) {
        if ($x -eq 1) {
            $thisGift = $gift
        }
        else {
            $thisGift = "$x $gift"
        }
        Write-Host "...$thisGift"
        $Speak.Speak($thisGift)
        $x--
    }
}

Enjoy!

Projects, Scripting, Technology

The Little (Code) Stuff That (Sometimes) Matters

As a follow-up to the post about tuning PowerShell scripts, this is going to be more general (language-neutral).  I’d like to run through some of the “efficiency” or optimization techniques that apply to all programming/scripting languages, due to how they’re parsed and executed at the lowest layer of a conventional x86/x64 system.

Why?  Good question.  I’ve been digging into some of the MIT OpenCourseware content and it brought back (good) memories from college studies.  So I figured, why not.

Condition Prioritization

Performance isn’t mentioned as much these days outside of gaming or content streaming topics.  But processing any iterative or selective task that deals with larger volumes of data can still benefit greatly from some very simple techniques.

Place the most-common case higher in the condition tests.  This is also a part of heuristics, which is basically intuition or educated guessing, etc.  Using pseudo-code, here’s an example:

while ($rownum -lt $total) {
  switch ($dataset[$rownum].SomeProperty) {
    value1 { Do-Something; break; }
    value2 { Do-SomethingElse; break; }
    default { Fuck-It; break; }
  }
  $rownum++
}

Let’s assume that “value2” is found in 90% of the $dataset rows.  In this basic while-loop with a switch-case condition test, a small data set (chewed up into $dataset) won’t reveal much in terms of prioritizing the switch() tests.  Remember, that mess above is “pseudo-code” so don’t yell at me if it blows up when you try to run it.

Anyhow, what happens when you’re chewing through 400 billion rows of terabytes of data? The difference between putting “value2” above “value1” can be significant.

This is most commonly found with initialization loops.  Those are when you start with a blank or unassigned value, and as the loop continues, the starting value is incremented or modified.  There is often a test within the iteration that checks if the value has been modified from the original.  Since the initial (null) value may only exist until the first cycle of the iteration, it would make sense to move the condition [is modified] above [is not modified], since that skips an unnecessary test on each subsequent iteration cycle.
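
To put that in concrete PowerShell terms (a contrived sketch; the two function names are placeholders, and the ordering is the whole point):

# contrived sketch: test the common case first
foreach ($row in $dataset) {
    if ($row.Value -ne $initialValue) {
        # common case (say, 90% of rows): already modified, so tested first
        Update-ModifiedRow $row    # placeholder function
    }
    else {
        # rare case: still at its initial value
        Initialize-Row $row        # placeholder function
    }
}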

Make sense?  Geez.  I ran out of coffee 3 hours ago, and it almost makes sense to me.  Just kidding.

Sorted Conditions / Re-Filtering

Another pattern that you may run across is when you need to check if a value is contained within an Array of values.  For most situations, you can just grab the array and check if the value is contained within it and all is good.  But when the search array contains thousands or more elements, and you’re also looping another array to check for elements, you may find that sorting both arrays first reduces the overall runtime.  That’s not all, however.

What happens when the search value begins with “Z” and your search array contains a million records starting with “A”?  You will waste condition testing on A-Y.

What if you instead add a step within the iteration (loop) to essentially “pop” the previously checked items off of the search array?  So, after getting to search value “M”, the search array only contains elements which begin with “M” and on to “Z”, etc.

Figure 1 – Static Target Search Array

Figure 2 – Reduction Target Search Array

To help explain the quasi-mathematical gibberish above: S = Search Time, R = Array Reduction Overhead Time, N = Elements in Search Set.  So R+1 denotes the time incurred by calculating the positional offset, and moving the search array starting index to the calculated value.  Whereas, S alone indicates just starting each iteration on the first element of the (static) target array and incrementing until the matching value is found.

So, what does this look like with PowerShell?  Here’s *one* example…

Figure 3 – PowerShell sample code

param (
  [parameter(Mandatory=$False, HelpMessage="Pretty progressbar, but slower to run!")]
  [switch] $PrettyProgress
)
# build an array of ("A1","A2",...,"A1000","B1","B2",...) up to 26 x 1000 = 26,000 elements

$searchArray = @()
$elementCount = 1000
$tcount = $elementCount * 26
$charArray = @()
$m = 0

Clear-Host

Write-Host "building search array..."
for ($i = 65; $i -le (65+25); $i++) {
  $c = [char]$i
  $charArray += $c
  for ($x = 1; $x -le $elementCount; $x++) {
     $cc = "$c$x"
     $searchArray += $cc
     $m++
     if ($PrettyProgress) { Write-Progress -Activity "$($charArray -join ' ')" -Status "Building array set" -CurrentOperation "$c $x" -PercentComplete $(($m / $tcount) * 100) }
  }
}
# define list of search values...
$elementList = @("A50","C99","D75","K400","M500","T600","Z900")
$randomList  = @("T505","C99","J755","K400","A55","U401","Z960")

Write-Host "`nStatic search array"
foreach ($v in $elementList) {
  $t1 = Get-Date
  $test = ($v -in $searchArray)
  $t2 = Get-Date
  Write-Output "$v = $((New-TimeSpan -Start $t1 -End $t2).TotalSeconds)"
}

# protect the original target array for possible future use...
$tempArray = $searchArray

Write-Host "`nReduction search array"
foreach ($v in $elementList) {
  $t1 = Get-Date
  $test = ($v -in $tempArray)
  $t2 = Get-Date
  # this is the real "R"...
  $pos = [array]::IndexOf($tempArray, $v)
  $tempArray = $tempArray[$pos..$tempArray.GetUpperBound(0)]
  Write-Output "$v = $((New-TimeSpan -Start $t1 -End $t2).TotalSeconds)"
}

Figure 4 – PowerShell example process output

The time values are in seconds, and will vary with each run depending upon thread processing overhead incurred by the host background processes.  But in general, the delta between the matched values in each iteration will be roughly the same.  To see this visually, here’s an Excel version…

Figure 5 – Spreadsheet table and Graph result

It’s worth noting that the impact of R may vary by language, as well as processing platform (hardware, operating system, etc.) along a different vector than others, but that within the iteration tests, the differences should be roughly similar.

There are other methods to reduce the target array as well, which may depend upon the software language used to process the tasks.  For example, whether the interpreter or compiler makes a complete copy of the search array in the background in order to provide the index offset starting point to the script.

Again, this is all relatively meaningless for smaller data sets, or less complex data structures.  And it really only provides significant value for sequential (ordered) search operations, not for random search operations.

So, some questions might arise from this:

  1. If the source array is not sorted, does the sorting operation itself wipe out the aggregate time savings of the reduction approach?
  2. Where is the “tipping point” that would cause this approach to be of value?

These are difficult to answer.  The nature of the data within the array will have an impact I’m sure, as might the manner in which the array is provided (on demand or static storage, etc.).  To paraphrase a Don Jones statement: “try it and see.”

Now that I’m done pretending to be smart, I’m going to grab a beer and go back to being stupid.  As always – I welcome your feedback.  – Enjoy your weekend!

Scripting, Technology

The Basic Basics of Evolving a Basic (PowerShell) Script, Basically Speaking


In case it wasn’t obvious from the heading: This is very very very very basic basic stuff.  This is intended for people just starting to work with PowerShell.  Typical scenario:

  • Person creates a script to perform a single task (example: copy files)
  • Person doesn’t consider the future of that script (additional uses)
  • Person reads this article and decides their script may have a future
  • Person reads this article and decides to increase their alcohol consumption rate

I put PowerShell in parentheses because this topic is really language-agnostic. I’m basing much of this on one of the course lectures from back when I attended Christopher Newport University years ago. In fact, that was when “software” was made from leather, “hardware” was either stone or wood, and processors ran on coal.

But anyhow, the point of this is to revisit something I often see with people who are just starting out with programming or scripting.  That is, how to give a basic script a tune-up, to make it more useful, and develop better coding habits going forward.

The format of this article will take an example script, and gradually (iteratively) modify it to address a few basic aspects that are commonly overlooked.  The example script is “copy-stuff.ps1”.

Version 1.0 – no diaper. poo everywhere, plays with loaded guns and broken liquor bottles in traffic.  But so, sooooooo cute…

$TargetPath = "\\fs02\docs\files"
$files = Get-ChildItem "c:\foo" -Filter "*.txt"
foreach ($file in $files) {
  copy $file.FullName $TargetPath
}

Example usage:

.\Copy-Stuff.ps1

This little chunk of tasty goodness works fine, and you just want to pinch its little fat cheeks and say stupid baby-talk things.  But it’s really rough around the edges.  It sticks forks in electrical outlets, yanks the dog’s tail, and keeps puking on everything.

Some things that would help make this script more useful:

  • Portability
    • What if you wanted to use this same script for different situations?
  • Exception handling
    • What if some (or all) of the things expected cannot be found at runtime?
    • What if the user doesn’t have permissions to source or target locations?
    • What if the target location doesn’t have enough free disk space?
    • What if you want to enforce some safeguards to prevent killing your network or disk space?
  • Self-describing information (help)
    • How can you make this easier for a new person to “figure out”?
  • Gold Teeth
    • Add a little spit-shine polish with your roommate’s best t-shirt

Version 1.1 – Portability and diaper added, learning “da da” already.

param (
  $TargetPath = "\\fs02\docs\files",
  $SourcePath = "c:\foo",
  $FileType = "*.txt"
)
$files = Get-ChildItem $SourcePath -Filter $FileType
foreach ($file in $files) {
  copy $file.FullName $TargetPath
}

Now, the script can be called with -TargetPath, -SourcePath and -FileType parameters to work with different paths and file types.

Example usage:

.\Copy-Stuff.ps1 -TargetPath "c:\folder2" -SourcePath "c:\folder1" -FileType "*.jpg"

But this still doesn’t help with Error Handling.  For example, if the user enters “x:\doofusbrain” or “y:\YoMamaSoBigSheGotLittleMamasOrbitingHer” and they don’t actually exist in the corporeal reality we call “Earth”.

Version 1.2 – Error Handling and utensils added, in a high-chair with a cold beer

param (
  $TargetPath = "\\fs02\docs\files",
  $SourcePath = "c:\foo",
  $FileType = "*.txt"
)
if (!(Test-Path $SourcePath) -or !(Test-Path $TargetPath)) {
  Write-Warning "check those paths son. you might be on drugs."
  break
}
$files = Get-ChildItem $SourcePath -Filter $FileType
foreach ($file in $files) {
  copy $file.FullName $TargetPath
}

At this point, it’s portable and checking for things before putting both feet in.  But it still needs some body work.  For example, suppose that the user staggers in from a night in jail, drops their liquor bottle and tries to invoke your script, but instead of putting in a non-existent -SourcePath value, they enter “” (an empty string).

.\Copy-Stuff -TargetPath "dog" -SourcePath "" -FileType "I'm sooo wastsed"

It’s time to add some parameter input validation gyration to this…

Version 1.3 – Kevlar-lined diapers, baby bottle converts into a pink “Hello Kitty!” RPG launcher…

param (
  [parameter(Mandatory=$False)]
    [ValidateNotNullOrEmpty()]
    [string] $TargetPath = "\\fs02\docs\files", 
  [parameter(Mandatory=$False)]
    [ValidateNotNullOrEmpty()]
    [string] $SourcePath = "c:\foo", 
  [parameter(Mandatory=$False)]
    [ValidateSet('TXT','JPG')]
    [string] $FileType = "TXT" 
)
if (!(Test-Path $SourcePath) -or !(Test-Path $TargetPath)) {
  Write-Warning "check those paths son. you might be on drugs."
  break
}
$files = Get-ChildItem $SourcePath -Filter "*.$FileType"
$filecount = $files.Count
$copycount = 1
foreach ($file in $files) { 
  copy $file.FullName $TargetPath 
  Write-Output "copied $copycount of $filecount files"
  $copycount++
}

The indentation of each [ValidateNotNullOrEmpty()] and [string] within the param() block is really not necessary.  I added them for visual clarity.  In fact, you could put the entire param() block on a single line, as long as you use comma separators and inhale enough paint solvent fumes first.  I recommend Xylene.

Notice that I switched from nude beach free-for-all party time on -FileType to a suit-wearing, neatly groomed, conservative business person variation using ValidateSet().  This takes away the loaded gun and gives the baby a squirt gun with only a few teaspoons of clean, luke-warm water.

Note: You could swap the [ValidateNotNullOrEmpty()] stuff with [ValidateScript()] and apply some voodoo toilet water magic to test for valid path references *before* diving into the murkiness.  But I already spent $5 on the (Test-Path) bundle and didn’t want to waste it.  Option C would be to inform every user that intentional misuse may result in their vehicle experiencing sudden loss of paint and tire pressure.
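
For reference, that [ValidateScript()] variation might look like this (same net effect, just failing earlier and louder):

param (
  [parameter(Mandatory=$False)]
    [ValidateScript({Test-Path $_})]
    [string] $SourcePath = "c:\foo"
)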

But – there’s still at least one more “error” case to consider.  What if the copy operations can’t be completed, no matter what?

What if the script is invoked by a user or service account/context, which doesn’t have sufficient permissions to the source or target locations to read and/or copy (write) the files?  Or what if the target location doesn’t have enough free disk space to allow the files to be copied?  So many “what-if’s”.

Version 1.4 – Old enough to drink, and shoot guns, but still getting carded at the door

param (
  [parameter(Mandatory=$False)]
    [ValidateNotNullOrEmpty()]
    [string] $TargetPath = "\\fs02\docs\files",
  [parameter(Mandatory=$False)]
    [ValidateNotNullOrEmpty()]
    [string] $SourcePath = "c:\foo",
  [parameter(Mandatory=$False)]
    [ValidateSet('TXT','JPG')]
    [string] $FileType = "TXT" 
)
if (!(Test-Path $SourcePath) -or !(Test-Path $TargetPath)) {
  Write-Warning "check those paths son. you might be on drugs."
  break
} 
$files = Get-ChildItem $SourcePath -Filter "*.$FileType"
$filecount = $files.Count
$copycount = 1
foreach ($file in $files) { 
  try {
    copy $file.FullName $TargetPath -ErrorAction Stop
    Write-Output "copied $copycount of $filecount files"
    $copycount++
  }
  catch {
    Write-Error $Error[0].Exception.Message
    break
  }
}

There’s much more you can do with error (exception) handling.  You could enforce restrictions on file types, or file sizes.  You could check for the error type and display more targeted explanations, rather than just dumping the $Error[0].Exception.Message content.  For more on this topic, I recommend this.

Version 1.5 – Self-Describing Help with french fries and a lobster bib

param (
  [parameter(Mandatory=$False, HelpMessage = "Destination Path")]
    [ValidateNotNullOrEmpty()]
    [string] $TargetPath = "\\fs02\docs\files",
  [parameter(Mandatory=$False, HelpMessage = "Source Path")]
    [ValidateNotNullOrEmpty()]
    [string] $SourcePath = "c:\foo",
  [parameter(Mandatory=$False, HelpMessage = "File extension filter")]
    [ValidateSet('TXT','JPG')]
    [string] $FileType = "TXT" 
)
if (!(Test-Path $SourcePath) -or !(Test-Path $TargetPath)) {
  Write-Warning "check those paths son. you might be on drugs."
  break
} 
$files = Get-ChildItem $SourcePath -Filter "*.$FileType"
$filecount = $files.Count
$copycount = 1
foreach ($file in $files) { 
  try {
    copy $file.FullName $TargetPath -ErrorAction Stop
    Write-Output "copied $copycount of $filecount files"
    $copycount++
  }
  catch {
    Write-Error $Error[0].Exception.Message
    break
  }
}

Now the script can be poked to display information that describes the purpose of each parameter.  There’s so much more we can do with this, but hit pause for a second…

> Why bother?  What’s wrong with a simple “copy from-this to-that” script?

The point of developing a skill/craft/caffeine-habit is to expand your capabilities and your value as a technical resource.  Anyone can fix a leak with duct tape.  But the person who can fix it with duct tape, while chugging a six-pack of beer, and singing Bohemian Rhapsody at the same time, is going to make a higher income.  And besides, it’s just cool stuff to learn.

Version 1.6 – the spit-polish, hand-rubbed, gluten-free version

[CmdletBinding(SupportsShouldProcess=$True)]
param (
  [parameter(Mandatory=$True, HelpMessage = "Destination Path")]
    [ValidateNotNullOrEmpty()]
    [string] $TargetPath,
  [parameter(Mandatory=$True, HelpMessage = "Source Path")]
    [ValidateNotNullOrEmpty()]
    [string] $SourcePath,
  [parameter(Mandatory=$False, HelpMessage = "File extension filter")]
    [ValidateSet('TXT','JPG')]
    [string] $FileType = "TXT" 
)
$time1 = Get-Date
if (!(Test-Path $SourcePath) -or !(Test-Path $TargetPath)) {
  Write-Warning "check those paths son. you might be on drugs."
  break
} 
$files = Get-ChildItem $SourcePath -Filter "*.$FileType"
$filecount = $files.Count
$copycount = 1
foreach ($file in $files) { 
  $pct = $($copycount / $filecount) * 100
  try {
    copy $file.FullName $TargetPath -ErrorAction Stop
    Write-Progress -Activity "Copying $copycount of $filecount files" -Status "Copying Files" -PercentComplete $pct
    $copycount++
  }
  catch {
    Write-Error $Error[0].Exception.Message
    break
  }
}
$time2 = Get-Date
Write-Verbose "completed in $([math]::Round((New-TimeSpan -Start $time1 -End $time2).TotalSeconds,2)) seconds"

Example usage (using -WhatIf):

.\Copy-Stuff.ps1 -SourcePath "c:\folder1" -TargetPath "x:\folder3" -Verbose -WhatIf

Some of the changes added to this iteration:

  • [CmdletBinding(SupportsShouldProcess=$True)] added so we can use Write-Verbose to toggle output display only when we really need it, and use -WhatIf to see what would happen if it actually did happen.
  • Write-Progress added for impressing people who like visual progress indication.
  • Displays total run time at the end, for those who are impatient.

[CmdletBinding()] vs. [CmdletBinding(SupportsShouldProcess=$True)] ?

If your code is going to modify things somewhere, and you’d like to have an option to try it in a “what-if?” mode first, use the longer form above.  If you only want to see verbose output (a la debugging/testing), you can use the shorter form above.  For more detail about this cool feature, and the other options it provides, click here.
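
One note: the -WhatIf above works because Copy-Item honors it on its own.  If your script modifies things directly, gate those steps yourself with $PSCmdlet.ShouldProcess; a minimal sketch:

[CmdletBinding(SupportsShouldProcess=$True)]
param ()
# ShouldProcess returns $False when -WhatIf is passed, so the delete is skipped
if ($PSCmdlet.ShouldProcess("c:\foo\old.txt", "Delete")) {
  Remove-Item "c:\foo\old.txt"
}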

Anyhow, I hope this was at least mildly helpful or amusing.  I’m sure half of you didn’t read this far, and half of those that did are rolling your eyes “He should’ve ____.  What a loser.”

Updated: Changed highlight color from dark red to turquoise because it sounds better. 🙂

Updated 2: Fixed “$FileType.*” to “*.$FileType” – thanks to @gpunktschmitz for catching that!

Anyhow, post feedback if you would like.  I’m once again weighing the future of this blog by the feedback (or lack thereof).  It’s starting to feel like talking into an empty room again.