Flashback Time Again

Drink up – and follow me into the wormhole of utter pointless reminiscence…


So, in the 1980s I was working as a “senior engineering technician”, which was US Navy speak for “senior draftsman” or “senior drafter”.  We worked with various UNIX-based workstations and mainframe systems to develop 3D models of US naval warship things.  Some of the names back then were CADAM, Pro/Engineer, Intergraph, and CADDS 4 or CADDS 5.

Interesting to note that, while AutoCAD existed in the late 1980s, the US Navy strictly forbade the use of any “PC-based CAD tools”, as they were deemed unreliable, inaccurate “toys”, as one admiral put it.  Over time, the Navy gradually allowed the use of AutoCAD and, later, MicroStation, DesignCAD, Drafix, and a few others, but only for “textual document data” such as title sheets (containing only tables, notes, and mostly text), while actual design data was still restricted to UNIX-based products.  Sometime in the early 1990s, the Navy finally gave in and allowed PC-based CAD products for all design work.

While my job title sounded like a typical office job, it wasn’t that typical.  We often split our time 50/50 between the office and going aboard ships (all over the place), climbing into the dirtiest, darkest, hottest, coldest, and sometimes most dangerous places on almost every kind of surface vessel the Navy had at the time.  From small frigates, supply ships, and even hydrofoil patrol boats, up to aircraft carriers and commercial cargo ships.

In most cases, the scheduling worked out perfectly to send us somewhere around 100°F at 95% humidity to do this, which works great with a morning-after hangover (I was in my 20s then).  By the time I left that industry, I had set foot in almost every space on a CVN-68 class aircraft carrier, and about half of the spaces on LHA and LHD class ships.  I’ve seen a lot of interesting stuff, and I’ve got the bumps and scars to remember it by.

Anyhow, back in the office, one of the popular CAD systems of the time was CADDS 4, aka CADDS-4X, sold by the Computervision Corporation, which was somewhat affiliated with DEC/Digital.

The workstations we used were priced around $35,000 each at the time.  We had around 20 of them in our office.  The mainframe components, the annual subscription, and the annual support costs were nearly four times the cost of all the workstations combined.  Hence the “4X”, ha ha ha!  Good thing I didn’t have the checkbook then.

Turf Battles

One of the cool features was the digitizer tablet and pen setup, which looked like the (linked picture), except we had multi-color monitors.  The tablet consisted of a frame, a paper menu mat, and a clear plastic cover/surface.  The center of the tablet was the drawing and manipulation area.  The left, top, and right outer regions were filled with menu “buttons”, printed regions you activated with the pen (no touch screens back then).

The buttons were programmable.  😀

We ran three (3) daily shifts to cover the projects, which were on a tight schedule.  Kevin, Timmy, and I split the shifts on workstation P1, for “Piping Systems Division, Station 1”.  Every month or so, we’d swap shifts to keep from going insane.  During one month, I worked first shift (8am – 4pm), Kevin had second shift (4pm – midnight), and Timmy had third shift (midnight to 8am).

We met to discuss logistics and so on, and agreed that each of us would claim a particular section of the tablet menu for our own custom button macro/command assignments.  First-world problems, of course.

Timmy didn’t like being confined.  He considered himself a free-range drafter.

Each night, Timmy would change the button assignments on the sections Kevin and I had agreed to claim.  This caused some angst, and I’ll explain why…

Sidebar –

Back then, the combination of limited hardware (processing power, memory caching, storage I/O performance, and network I/O) made for slow work, particularly when it came to opening and saving model data.  A typical “main machinery room” (aka engine room) space model would take around 35-40 minutes to open.  The regular morning process was as follows:

  • 7:30 AM – Arrive at office
  • 7:35 AM – Log into workstation terminal
  • 7:37 AM – Open model file and initiate a graphics “regen all”
  • 7:39 AM – Search for coffee and sugary stuff
  • 7:45 AM – Discuss latest TV shows, movies, sports game, news story
  • 7:59 AM – Run for nearest restroom
  • 8:19 AM – Emerge from restroom
  • 8:20 AM – Model is generated and ready to begin work
  • 8:21 AM – Make first edit
  • 8:21:05 AM – Save!

Now, here’s the rub:  In 1987-88, there was no concept of an “undo” or “redo” in most of the CAD/CAM systems of the day.  So, we made sure to “save often and be careful with every edit”.

Second Rub:  Timmy liked to modify our programmed menu keys, which caused us a lot of headaches.  For example, clicking a button labeled “Draw Line” might invoke “Translate Layer X” (select all objects on layer “X” and move them to another layer).  A drastic operation like that required exiting the part (without saving) and re-opening it.

Third Rub: Closing a model took about five (5) minutes.  So any mistake that required dropping out and coming back in meant roughly 45-50 minutes of wasted time.  Well, not totally wasted.  It gave us more time to discuss the latest GNR album, Ozzy, whatever movies were out, and so on, plus more coffee and sugary stuff.

So, after repeated unsuccessful attempts to educate Timmy, Kevin and I agreed to modify all of Timmy’s command buttons to “exit part no file”.  Timmy would open his model file, wait 40-45 minutes, click “draw circle”, and watch his screen go blank with a prompt showing “part file exited successfully” or something like that.

After one (1) day of that, Timmy didn’t mess with our menu buttons again.

By the way, in one of those 3D models, buried way inside one of the machinery spaces, there just might be a pressure gauge dial, among a cluster of other gauge dials, on an obscure corner bulkhead (ship-speak for “wall”), which has Mickey Mouse’s face and hands on it.  An Easter egg from 1988, lying dormant in an electronic vault in a dark warehouse at an undisclosed location.

What I’ve Learned from Doing IT Interviews


WARNING: My humor tank is running low today.  This one is a semi-quasi-serious post with sub-humor ramifications and subtle uses of pontificatory inflection.  cough cough…

Like many (most) of you, for years, I’ve been the one sweating through an interview.  I’ve had bad interview experiences, and good ones; maybe even a great one, once or twice.

On the bad list was one with a well-known hardware vendor, where I was introduced to three “tech reviewers” on the call who regularly speak at pretty much EVERY IT conference on Earth, and have written enough books for me to climb a stack and change a light bulb.  I was in over my head, but thankfully, they appreciated my humility and sense of humor (had an interesting follow-on conversation at the end as well, but I’ll leave that for another time).

On the good list was the most recent interview I had (for my current job), where the interviewer took the time to share some fantastic technical advice which helped me on the project I was working on with my previous employer.  More than an interview, it was like a mini training session.  Needless to say, he liked my mental problem-solving process enough to offer me this job.  Very, very much appreciated.

But this post is really about the flip side of the interview process: what I’ve learned from interviewing others for various types of positions.  At a former employer, I was the administrative “lead” of a team of six (6) incredibly skilled people.  Part of my role was to interview candidates with a very uncommon set of skills to fit into that project.

At my current employer, I’ve been interviewing like mad to help a customer fill staffing needs for another set of uncommon skills. Not that the individual skills are necessarily uncommon, but the mix of skills in a single person seems to be uncommon.  I have to say, it’s been both enjoyable, and educational for me.

I hope that this experience helps me with future interviews when I go looking for a new job (or a promotion).

I’ve tried to apply the “good” experiences from my interviewee past as much as possible.  For example, not just grilling candidates to make them sweat, but helping them along the way in a give-and-take discussion.  Not a lecture.  And not a cross-examination.  It’s been eye-opening for me, to say the least.  So here’s what I’ve learned:

1 – Keep it Simple

When asked to respond to a “what would you do if…” scenario, start with the most basic step.  A classic example question is: “You have a web server that relies on a separate SQL host to support a web application.  After working fine for a while, it now shows an error that it can no longer connect to the SQL host.  What would your first step be?”

Bad answers: “I’d check the SQL logs.”  “I’d confirm the SQL security permissions.”  “I’d verify that the SQL services were running on the SQL host.”  “I’d Telnet to the SQL host.”

Better answer: “I’d try to ping the SQL host from the web server.”
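To show why the “most basic step” answer wins, here’s a minimal, hypothetical Python sketch of that bottom-up check (the host name and port are placeholders; SQL Server’s default port 1433 is assumed).  It verifies name resolution first, then a raw TCP connection, before anything application-level like logs or permissions:

```python
import socket

def check_sql_connectivity(host, port=1433, timeout=3):
    """Bottom-up first check: name resolution, then a raw TCP connect.

    A true ICMP ping needs raw-socket privileges, so a TCP connect is
    used here as the closest scriptable stand-in for "can I reach it?"
    """
    try:
        addr = socket.gethostbyname(host)      # step 1: does the name resolve?
    except socket.gaierror:
        return "DNS resolution failed"
    try:
        # step 2: is the network path open and the port listening?
        with socket.create_connection((addr, port), timeout=timeout):
            return "TCP connect OK"
    except OSError:
        return f"resolved to {addr}, but TCP connect to port {port} failed"
```

Only after the network layer checks out is it worth moving up the stack to services, permissions, and logs.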

2 – Know the Basic Basics of your Platform

If the role involves system administration (aka “sysadmin”) duties, you should be familiar with at least the names of features, components, and commands.  You don’t necessarily have to know every syntactical nuance of them, just what they are, and what they’re used for.  For example, “what command would you use to register a DLL?” or “What command would you use to change the startup type of a service?”

If the interviewer doesn’t focus on scripting aspects, ask whether they want the command-line tool or the PowerShell cmdlet, then take it from there.  If they ask about the command, just give them the command.  You don’t need to describe the various ramifications of using it, or how it would be better/easier/cooler to do it with PowerShell.  If they ask about PowerShell methods, answer with the appropriate cmdlet, or just describe the script code at a 100,000-foot level.  That said, if the interviewer is focused on your PowerShell acumen, dive deeper, but ask if that’s what they want to hear first.
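For the two sample questions above, a cheat-sheet of the classic Windows answers might look like this (a sketch; the DLL path and service name are placeholders, not from the original):

```shell
# "What command would you use to register a DLL?"
regsvr32 /s C:\Path\To\example.dll

# "What command would you use to change the startup type of a service?"
sc config Spooler start= auto       # note: the space after "start=" is required

# ...and the PowerShell cmdlet equivalent, if that's what they ask for:
Set-Service -Name Spooler -StartupType Automatic
```

These are Windows-only, so treat them as reference lines rather than a runnable script.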

3 – Don’t be Afraid to say “I Don’t Know”

If an interview question leaves you stumped, don’t hem and haw, and don’t make something up.  Just say “I don’t know”, but, and I mean BUT… follow that with some next-step direction.  For example, “I don’t know, but I would research it by going to ___ and searching for ____.”

4 – Ask Questions

A lot of the time, the interviewer is also looking for indications of how the candidate interacts with a situation, such as the interview itself.  They want to know if you’re inclined to question and explore each situation, rather than just react to it.  Sometimes the interviewer will ask “Do you have any questions?”, and sometimes they won’t.  Regardless, it’s often good to ask at least one or two questions, even if it’s just “What’s the next step?”

5 – Get a Critique if Possible

At the end of the interview, unless you feel certain you nailed it, I always recommend asking the interviewer for some feedback on how you did.  Ask if there were any areas where you could have responded better.  Don’t worry about getting granular details; even general responses can be very helpful.  Whether it’s technical, personal, or otherwise, anything is pure GOLD when it comes to this.

It’s a rare chance to get tips that will help you in future interviews.  This is particularly true when you feel pretty sure the employer isn’t going to make you an offer.  That doesn’t mean you’re a failure; it just means you didn’t fit the position they’re looking to fill.

IT Security Methods by Industry

After years (okay, decades… okay, okay, centuries… damn it… alright! alright already, eons… are you happy now?  Yes.  I’m THAT freaking old.  I still remember coal-fired computers and horse-drawn airplanes and shit.  My birthday cake is a slice of tree trunk with matching rings, but the table can’t hold the weight anymore.  Sheesh!)

What was I saying?  …. (eyes wandering left and right…. … . . .          …  .         …. . .      .   .  )

oh yeah!  I’ve amassed a data set that accurately summarizes the predominant security practices or strategic “methods” leveraged by each major US industry. I warn you: this is highly scientific information.  It may require additional consumption of various questionable substances just to remain conscious while trying to read it all. Here goes.



Method: Place sufficient restrictions on the adoption of new technologies so as to (A) mitigate unknown vulnerabilities and exploits, (B) ensure that those with knowledge of older, proven exploits have died of old age, and (C) keep certain aging consultants employed (because they’re married into your family).  And besides, what’s wrong with COBOL?


Method:  Never leave important IT decisions up to any one person, ever.  In fact, the more people involved, the greater the assurance that the decision will eventually be reliable, maybe.  Larger companies focus on perfecting multi-role, hyper-proliferated subterfuge logic branching and coalescing processes.  In layman’s terms: they foster greater variety among responses to decision inquiries.  Many have invested heavily in processes which depend entirely on custom hand-stitched, stone-carved, natural-leather-encased software, usually written by someone who left or died long ago.

Defense Manufacturing

Method: Implement dozens of stop-gap procedures to ensure every motion of IT is slowed to the lowest possible, almost unmeasurable, velocity.  Think of a Japanese rock garden, only slower.  Where the sand is executive processes and the stones are IT staff, now simply add quick-set cement to the sand mix and sprinkle some water on it.  This ensures that even the bad stuff will take forever to make headway, and by then the entire system will have been decommissioned.  Forget penetration attempts, even social-engineering-based ones, because work is project-oriented rather than departmental, so most people have no clue what the next cube is working on.  In fact, they probably don’t use the same network, computers, or operating systems.


Method: Relegate “IT” to whoever answers the Craigslist ad for an “IT Expert”.  Critical skills include: printer management, thumb drives, recovering lost files and emails, and using “Excel databases” (that’s not a typo).  Must also have experience with Macs and Windows XP, particularly with kids’ games.

If they have any in-house “IT” capacity at all, it’s often enough of a shock to send a consultant into cardiac arrest.  Due to possible legal implications, it’s best to never change passwords for critical user accounts and never, I mean NEVER, delete anything.  Keep everything forever, or for as long as you can afford somewhere to store it.


Method:  Agents need to be flexible and mobile.  Everything is done on laptops.  Everything remains on laptops.  No time for that silly, trendy, cloud stuff.  No backups, no cloud sync, but OMFG do NOT let anything happen to that precious data on those roaming laptops!  Thumb drives are forgotten like Matt Damon in Interstellar, waiting for someone to give them a hug, only to have their face shield cracked open and their chip tossed away.  Shit.  Did I give away the plot?

Advertising / Marketing

Method: Hire someone quick, and get back to the conference before the food runs out.


Method: If it’s airlines, use railroad standards.  If railroads, use airline standards.  Either way, the older the technology the better.  It’s like a cast-iron frying pan after years of seasoning, or a vintage wine.



Method: Deny all requests for pay increases for five (5) years, reduce promotions from once every five (5) years to once every ten (10) years, discontinue any training programs, and for God’s sake, deny all requests for stupid things like newer software and hardware.  It worked in 1995, so it should still work!  Hire a consultant to blame internal staff for every deficiency, then terminate and reassign to avoid audit trails, and blame the contractor afterwards.

Federal Agencies

Method: Same as municipal, but on a much larger scale.  Every four (4) years, change direction from in-sourcing to out-sourcing, and blame the opposite side for any failures that remain.  If conservatives win, out-source to private contractors, where expertise and trust are premium values; after all, when has anyone ever heard of a private contractor doing something wrong in a government position?  Then blame liberals.  If liberals win, open up the job-requisition floodgates and hire at will.  However, keep GS-rating pay scales at 1995 levels to avoid asking for tax increases.  This helps ensure only the highest-quality employees are onboarded from their previous positions as private contractors or foreign exchange students.  Then blame conservatives for any failures.  Think of it as seasonal employment.

Medical/Dental Practices

Method: Hire the first contracting IT firm that actually shows up.  If they wear those spiffy-looking polo shirts with a slick company logo, they might be too expensive.  Ask if your cousin’s friend graduated tech school yet.  You know, the one who puked all over your sofa when your cousin brought her over to crash in your apartment while you were out of town.  That one.  If she’s not available, what about that kid who asked you about spark plugs while you were trying to inflate your car tires that day?



See if you can guess which of these most closely matches the photo above.

Expert Tips for SCCM Log Analysis


1. Locate cmtrace.exe (or another suitable “active” log viewer)
2. Open cmtrace.exe and click “yes” to register it as default log viewer
3. Consume precisely 5 quarts of a strong, caffeinated liquid substance
4. Browse to location (folder) with log files and double-click desired log file
5. Rub eyelids approximately 12 times; make sure to yawn fully and loudly
6. Stare at log details and look for any lines colored in red.
7. Ignore red lines which do not actually display an error, but are instead mentioning that they’re looking for an error
8. Ignore yellow lines which do not actually display a warning, but instead show mention of looking for warnings
9. Rub eyelids 12 more times.
10. Announce to whoever interrupts that you’re busy reviewing log files (the louder the better)
11. Open another log file (selected at random)
12. Stare intently at one line, without scrolling
13. Rub chin, squint, and nod slightly. You may also say “hmmmm”
14. Scroll and repeat step 12
15. Repeat steps 10 through 13, approximately 5 more times.
16. Open browser and begin searching for fragments of error messages along with “sccm error log…”
17. Inhale deeply, exhale loudly.
18. Consume more caffeinated liquids
19. Rub eyes some more
20. Lunch break.

(Seriously) 5 Most Common SCCM Issues

Joking aside (for a few minutes anyway)…


Here are the five (5) most common root causes of SCCM site issues that I’ve seen over the past year while working as a consultant.

  • Site scale:  (smallest) 500, (largest) 180,000
  • Site types: CAS (5%), Primary alone (85%), Primary with Secondaries (5%), None (5%) aka “new install”
  • Avg staffing: (IT dept) 12-24 (SCCM admin) 1
  • Avg coffee consumption: 1 cup per 30 minutes
  • Avg sleep: 5.2 hours

1 – Lack of planning before installing the environment

In the past year alone, I’ve run across almost a dozen sites which had a CAS and didn’t need one, or had Secondary sites they didn’t need, and so on.  Some didn’t have an FSP and could’ve used one.  Some weren’t using the appropriate credentials for client installations, network access, and so on.  And lately, many seem to have pinned their plans on outdated platforms, such as Windows Server 2008 R2 or SQL Server 2012.  At least keep them patched (e.g. SQL Server 2012 SP3 CU9).

2 – Lack of monitoring and following-up on warnings/errors

Of the last 24 customer engagements I’ve been involved with, roughly 60% do not keep a daily watch over site issues (sites, components, clients, content distribution, deployments, etc.).  Of those that do monitor, about half ignore lingering warnings which impact site performance.

3 – Lack of cohesive management

This varies by the scale/size of the organization (at least in my world).  Often it’s a matter of job roles and organizational divisions.  For example, DBAs controlling the SQL Server environment without allowing SCCM admins any direct access (very bad).  Or AD admins who drag their feet (or push back) on requests for schema extensions, keeping AD accounts “clean”, and so on.  Or network admins who fight back against using PXE, no matter what the rationale.  In many cases, it rolls up to team managers who don’t work well together, so resolving conflicts and barriers is difficult, especially when the CTO or CIO prefers to avoid dealing with it.  My advice: deal with it!  The good of the company outweighs your stupid personal disagreements.

4 – Lack of keeping up on updates

Whether it’s the Windows Server, SQL Server, ADK, MDT or Configuration Manager itself, all of these require persistent support and oversight. Keep them patched.  But more importantly, READ THE PATCH details first.  Understand what’s being “fixed” or “modified” (or deprecated) as well as “known issues”.  You can save yourself a shit-ton (that’s a scientific measurement, by the way) of headaches and support costs by not blindly installing without understanding.  However, do not avoid patching simply because of fear and doubt.  You work in IT, which means “change” is inevitable and continuous.  It’s why the “soft” in “software” exists (trust me, Babbage wasn’t kidding around).

5 – Inefficient use of features

This one alone could be broken out into sub-categories, actually, and now that I’ve mentioned it, I will…

a – Ignoring features which are not fully understood (not doing research)

b – Continuing to use outdated methods (disk imaging, for one, like Acronis or Ghost)

c – Ignoring other System Center capabilities (SCOM, Orchestrator, etc.)

d – Not following “best practices” (excessive permissions on common accounts, incorrect client installation settings)

e – Paying for 3rd-party products which SCCM (or other System Center) capabilities could provide (depends upon the individual requirements of course)

f – Ignoring 3rd-party products out of fear of the unknown (FUD)

g – Ignoring new features added with each build (current branch), such as Azure, OMS, UA, and mobile device features

h – [my peeve] Inefficient mapping of tools to processes.  Such as ignoring Group Policy in favor of doing everything in SCCM or via scripts. Continuing to use familiar solutions even when newer and better (cheaper, faster, more efficient, more reliable) solutions are available.

i – Insufficient use of Internet search tools (Google, Bing, etc.)

Did I miss anything?

5 Tips for Fixing Broken SCCM DMZ services


The following five (5) tips should help even the most seasoned SCCM expert determine the root cause for problematic DMZ environments.

Reasons you’re having trouble with your SCCM DMZ

1 – You don’t actually have a DMZ

2 – The DMZ doesn’t contain a SCCM site system, nor an AD Forest trust, nor any network connections back into the internal network.  You might also not have any SCCM clients that operate in the DMZ.

3 – You have no idea what “SCCM” or “DMZ” are.  And you don’t really care.

4 – You work in the Finance department.

5 – Why are you reading this?

Sorry – I needed a break from mind-numbing emails and phone calls today.

5 Myths of Modern IT


These are just five (5) of the most common statements/assertions/quotes I’ve overheard over the years while working in IT.  Every time I hear them, I have to take a deep breath and suppress my inner angst (to put it mildly).  This post isn’t all that funny, actually, but I ran out of coffee and it’s too late for bourbon on a weeknight.  So I donned my custom-fit tin-foil hat and shall henceforth pontificate…

“The goal of Automation is that it frees up employees to focus on other important tasks”

Conceptually, this is plausible.  But, and this is a big BUT (and I cannot lie, all you other brothers can’t, oh never mind…), it depends on the source.  ‘Who’ initiates the push towards automation is what determines the validity of this statement the most.  If the premium placed on automation is cultivated in the ranks, this statement can be, and often is, very real.  However, when it’s initiated from the “top” (usually business, rather than technical ranks) it’s almost always (okay, 99.999999999999999999999999999999999999999999999999999999999% of the time) aimed at reducing staff and employee costs.

I’ve seen various spins and flavors of this, depending upon business culture.  The “reduction” can range from departmental shifts, to demotions, contracting-out, layoffs, and outright terminations (depending upon applicable labor laws).  Indeed, as much as I love (and earn a handsome living on) business process automation, using IT resources, I never allow myself to forget the ultimate goal: to reduce human labor demand.  The more I spend time with non-IT management, the more I see evidence to prove this assertion every day.

With that said, if your particular automation incentives are derived internally, push onward and upward.  Don’t let me talk you out of that (why would I?)

“The value of the cloud is that it enables on-prem expansion with fewer constraints”

This is a contextual statement.  Meaning, taken out of context, it is indeed valid.  However, when inserted into standard sales talk (also commonly and scientifically referred to as “talking shit”), it’s often sold as the premium value of the overarching model.  In reality, I have seen only two (2) cases, and heard of only two (2) others, out of dozens, where a permanent hybrid model was the ultimate goal of a cloud implementation project.

The majority of enterprise cloud projects are aimed at reducing on-prem datacenters, often to the point of complete elimination.  There’s nothing inherently wrong with that; it makes good business sense.  But selling it under a false pretense is just wrong.  Indeed, on the last five (5) cloud migration projects I’ve been involved with, the customers stated something akin to “I want to get rid of our datacenters” or “I want all data centers gone”.  The latter quote came from a Fortune 100 company CIO, with a lot of datacenters and employees.

“Who needs sleep?”

Don’t fall victim to this utter bullshit.  If you believe you only need a “reboot” as often as your servers do, you’re putting your own life at a lower value than common hardware.  If you’re a “night owl”, that’s fine, but only as long as you adjust your wake-up time to suit.  Always ask yourself where this inclination to never sleep starts.  Is it coming from management?  From your peers?  From personal habit?  If it’s coming from management, move on to a better workplace.  If it’s coming from your peers, you need to expand your network.  If it’s coming from personal habit, fix it.

A few years ago, I fell into the habit of working myself almost (literally) to death.  Mostly from what I call “code immersion”: that urge to “get one more line done”, and then another, and it never ends.  I was averaging 2-3 hours of sleep a night over the course of a year.  It finally caught up with me in a very bad way.  I’ve since taken action to prevent that from happening again.  I’ve seen way too many people die from not taking care of themselves.  Way too many.  Don’t be another statistic.

“This is cutting edge”

I have another quote (and I’m still trying to identify the true source of it), that runs counter to that: “We live in ancient times“.

Everything we do in IT, and I mean EVERYTHING, will be gone from this Earth long before most of the furniture in your house.  Long before your house is gone.  Statistically speaking, this is a valid statement.  Information Technology is a process, not an end result.  It’s a process of optimizing information access and accuracy, which evolves over time.  The tools and technologies employed to that purpose also evolve.

“The customer is always right”

If they were, then why do they need you?  And more importantly, why are they paying you to help them?  That said, the customer holds the purse strings, and the promise of future work, so don’t ever charge out of the gate with a smug demeanor.  Every new customer engagement should start off deferential.  It should then evolve and progress based on circumstances and communication.   However, anyone who works in IT and insists that the customer is “always right” is misguided or just stupid AF.

Honorable mentions (phrases that annoy the $%^&* out of me)

  • “You can’t afford NOT to!”
  • Excessive use of buzzwords like “holistically”, “literally” and “ummm”
  • “It pays for itself!”
  • “It’s the next ______, only better!”
  • “Why? Because ours is a better solution”
  • “The Cloud is a fad”


Everything you read above could, quite possibly, be entirely rubbish.  After all, I’m a nobody.  I just call it as I see it.