Yes. Again. This time smothered in thick and zesty weird sauce.  But this time, I’m not forking around.  Heh heh.


In the past three weeks, I’ve run into quite a few conversations about this topic and wanted to share some thoughts pertaining thereto herein forthwith and notwithstanding. Words I’ve always wanted to try on, but never had the caffeine to do so.

The linchpin of all of this is the static nature of deployments. What I mean is that when you build a deployment, the UNC path is hard-coded into the .ini file. The net result is that whenever the deployment is executed, the consumer (client device) will rely strictly on that path value for the duration of the process. Autodesk has posted some helpful guidance on this, but the main issue remains that it is, in fact, a static assignment.

The fugitives:  ADMIN_IMAGE_LOCATION and NETWORK_LOG_PATH (if you enabled network status logging during the deployment build process).  The ADMIN_IMAGE_LOCATION assignment is the center-mass target with red laser dots dancing all over it.
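For reference, those two assignments look roughly like the lines below. This is a hypothetical excerpt only: the file name, any surrounding section layout, and the server/share names are illustrative; just the two key names come from the deployment build itself.

```ini
; Hypothetical excerpt from a deployment .ini -- server and share names
; are made up; only the key names are real.
ADMIN_IMAGE_LOCATION=\\SERVER01\Deployments\BDSP2015\Img
NETWORK_LOG_PATH=\\SERVER01\Deployments\BDSP2015\Logs
```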

Why is this bad? Well, I’m not saying it is bad.

There may be circumstances where it is not a concern at all. In others, it's a royal pain in the taint. Particularly when it comes to distributed staging for large deployments. Oh. Big words are awesome. In layman's terms: it's about copying the deployment share to multiple servers to allow for more targeted distribution across a dispersed WAN environment. (Long inhale)

Basically, when using enterprise class deployment tools, like Microsoft System Center Configuration Manager 2012, you want to avoid distributing a lot of content throughout the site servers unnecessarily.  You’d hate to spend hours waiting for content to roll out to a bunch of DP servers, only to discover that the devices all ran back to the original share to perform their installations.  Because that’s exactly what would happen in this context.

Ok. I’m putting down the dictionary now and backing away. But either way, the discussions I’ve had illuminated (another word I’ve been waiting since 4th grade to try on) some unique approaches to this.

Option 1 – Create Multiple Deployment Shares

Option 2 – Copy a Deployment Share / Manually Edit .ini

Option 3 – Copy a Deployment Share / Manually Edit .ini Version 2.0

Confused? I often am. Let’s break these down some more…

Option 1 is where you just run the Deployment Build process each time for each target server you wish to create. The advantage is a clean setup on each server. The drawback is time. You could watch your entire life pass you by as you wait for that progress bar to ooze across the form.

When leveraging something like SCCM to do the heavy lifting, if you have 6 server shares to employ, you’d have 6 Applications, with 1 or 2 Deployment Types per Application, targeted at 6 Collections of users or devices. Not very tasty to think about.  But keep in mind the side of this that involves downloading to each client cache.

Option 2 is where you create one Deployment Share using the builder app, and then copy it out to other servers. Then you would edit the UNC path setting in each .ini file, except for the original, of course. The advantage is that it can (potentially) save some time. The drawback is waiting for the copying to finish. In SCCM you could then create a deployment type for each share and create a corresponding (SCCM) deployment.
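The edit step in Option 2 is mechanical enough to script. Here's a minimal sketch (the helper name, the sample text, and the server names are all my own assumptions — only the ADMIN_IMAGE_LOCATION key name comes from the deployment build):

```python
# Sketch: retarget the hard-coded UNC path in a copied deployment .ini.
# Server/share names below are purely hypothetical.

def retarget_ini(ini_text: str, old_share: str, new_share: str) -> str:
    """Return ini_text with every occurrence of old_share swapped for new_share."""
    return ini_text.replace(old_share, new_share)

if __name__ == "__main__":
    sample = ("ADMIN_IMAGE_LOCATION=\\\\SERVER01\\Deploy\\Img\n"
              "NETWORK_LOG_PATH=\\\\SERVER01\\Deploy\\Logs\n")
    # Point the copied share's .ini at the branch server instead.
    print(retarget_ini(sample, r"\\SERVER01", r"\\BRANCH-SVR02"))
```

In practice you'd read each copied share's .ini from disk, run the replacement, and write it back — a plain string replace keeps every other setting untouched.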

So for six server shares across your WAN, again, you’d have six deployment shares, but possibly only one SCCM Application object, with 6 deployment types, and targeting 6 different collections of users or devices.  Also, the same client cache caveat applies as with Option 1.

Option 3 is kind of interesting in that it takes option 2 in a slightly different direction.  In addition, it’s not just one direction.  There are quite a few sub-directions you could take. That is, if “sub-directions” is a real word or phrase.  Oh, what the heck, I’m claiming it now.  The next time someone asks me for directions, I’m going to offer a few sub-directions as well.  Just to see their reaction.

Option 3 is about creating one Deployment Share and copying it around to each server. Then you edit the UNC path variable in each .ini to suit each of the replicated server shares. Use a script to detect the site boundary in which the client resides (AD site or SCCM site, if you have things that elaborately spread out), and the script can invoke the appropriate .ini and fire up the engines.
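The lookup at the heart of that script can be dead simple. A sketch under stated assumptions — the site names, UNC paths, and fallback behavior are all hypothetical, and the actual site detection (querying AD or the SCCM client) is left to whatever tooling you use:

```python
# Sketch: map a client's AD/SCCM site to its nearest deployment share.
# Site names and UNC paths are hypothetical examples.
SITE_TO_SHARE = {
    "HQ":     r"\\HQ-SVR01\Deployments\BDSP2015",
    "BRANCH": r"\\BR-SVR02\Deployments\BDSP2015",
    "REMOTE": r"\\RM-SVR03\Deployments\BDSP2015",
}

def pick_share(site_name: str, default_site: str = "HQ") -> str:
    """Return the deployment share for this site, falling back to a default."""
    return SITE_TO_SHARE.get(site_name, SITE_TO_SHARE[default_site])
```

A client in the "BRANCH" site would resolve to the branch server's share; an unrecognized site falls back to headquarters rather than failing outright.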

But Wait! There’s More?

Here’s another seemingly trivial consideration regarding SCCM 2012 that can cause a major headache: With Packages you can still configure the Advertisements to “run from distribution point” or “download and run locally”.  However, with Applications and Deployments, that option doesn’t exist.  The content is always downloaded to the local cache.  SCCM 2007 didn’t have “Application” objects or deployment types, just packages and advertisements.  Things were simple.  Like a square wheel: Nice and simple with simple flat sides.

Keep in mind that Autodesk Building Design Suite Premium 2015 takes just under an hour to install, on a good day.  That much content streaming down to each hard drive before executing a local installation not only kills disk space, but can clog LAN segments, depending on how many installs are kicked off at one time. (That’s because BITS is only aware of the local interface; it cannot sense overall, aggregate LAN-segment traffic.)

Even worse?  Let’s say you go ahead and allow the content to stream to each device cache and go all wild west on everything.  The content will possibly take 30-45 minutes to download, and another 40-45 to execute.  But here’s the catch:  As soon as the local “setup.exe” kicks off, it will read the UNC path assignment in the local copy of the .ini file, and go immediately back to the UNC path for the rest, ignoring the local cache entirely.  Time = Wasted.  Disk space = Wasted.  Too much eggnog = Wasted too.

So, what to do?

First, do not configure the SCCM Application source to point at the main Deployment Share!  Instead, create a folder containing only the installation script(s) and use that as the source for replication to the DPs.  The script is small, so it replicates quickly, downloads to each device cache, and kicks off locally.  The execution path within the script should point back to the actual UNC deployment share location for the appropriate site (LAN segment, AD site, subnet, Moon base, Martian colony, intergalactic zone mapping, whatever).  From there it will “run from distribution point”, sort of.
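Putting the pieces together, the lightweight script that actually replicates to the DPs might do little more than build the real command line against the site-appropriate share and fire it. A sketch only — the Img folder layout, file names, and the /I switch are placeholder assumptions, not documented Autodesk syntax; copy the actual command from the shortcut your deployment build creates:

```python
# Sketch: a tiny wrapper that launches the deployment from the site-local
# share ("run from distribution point", sort of). All names are hypothetical;
# take the real switches from the shortcut the deployment build generates.

def build_command(share: str, deployment_name: str) -> list:
    """Assemble the install command line against a given deployment share."""
    setup_exe = rf"{share}\Img\Setup.exe"            # hypothetical layout
    ini_file = rf"{share}\Img\{deployment_name}.ini"  # hypothetical layout
    return [setup_exe, "/I", ini_file]                # placeholder switch

if __name__ == "__main__":
    cmd = build_command(r"\\BR-SVR02\Deployments\BDSP2015", "BDSP2015")
    # On a real client you'd run it, e.g.: subprocess.run(cmd, check=True)
    print(" ".join(cmd))
```

The share argument would come from the site lookup described under Option 3, so the only thing SCCM ever replicates is this stub.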

Clear as mud?  Drop me a line if you have any questions.  I’m off to slay the traffic beast in the rain on Christmas Eve.  Merry Christmas!  Happy Hanukkah!  Happy Kwanzaa! And whatever else you celebrate that involves shared happiness.

