wearandtear

It’s time for another round of pompous, preposterous, pretentious pontification.

So, the GitHub trial was an experiment, but I really chose the wrong venue to host a project aimed at crowdsourcing or feedback channeling.  SourceForge might actually be better suited for that sort of thing, and there are probably better options still.  Regardless, the feedback was not sufficient to justify continued effort.  So… it’s time to pontificate and postulate, shall we?  Let’s!

Preface

It’s important to note that the views expressed herein are my own, and based on my own limited experiences.  I am not an MVP or anything, so any or all of the statements made below could be argued, and I’m sure many could be proven wrong.  I lay them out here in case they may be helpful to someone else in some way.  That’s it.

What is (or was) CMWT?

For those who didn’t check it out, and ignored my previous blabber, CMWT, or ConfigMgr Web Tools, was a project to put a “free” (as in open source) web interface on Microsoft System Center Configuration Manager.  It also included a considerable number of features for managing Active Directory from the same console interface, and for linking the two.  There were also additional resource management tools for invoking remote tasks on clients in “real-time” from the web console.

In the end, after repeated attempts to get this idea off the ground, the results were less than satisfying.  I’d compare it to trying to fly off a cliff wearing wings made of cardboard boxes.  I flapped a lot (in more ways than one) and it was possibly entertaining, but that’s about it.  But anyhow…

Model Digressions

I’ve been asked a few times (very few) how the plumbing works between CMWT, Configuration Manager and Active Directory.  Not that it really matters at this point, but why not digress a bit?

The traditional approach to establishing external interfaces with Configuration Manager is (or has been) via the “SMS Provider”, the WMI/WBEM interface API.  Even the PowerShell cmdlets are built on this same model.  The problem I’ve found is that for performing complex read-operations against the data store (ultimately: SQL Server), performance is much slower than via ADO select-operations or execute-operations (for stored procedures, table-valued functions, etc.).

So, I followed the path of least resistance, as far as performance bottlenecks are concerned, and opted to use the SMS Provider for change-operations (adding members to collections, removing members, modifying other site or site resource aspects) and ADO for read-operations.  A rough sketch of both paths follows.
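Here’s a minimal sketch of the two paths in classic ASP/VBScript, not lifted from CMWT itself.  The server names, site code, database name, collection ID and resource ID are placeholders; the v_R_System view and the SMS_Collection / SMS_CollectionRuleDirect classes with the AddMembershipRule method are standard ConfigMgr interfaces.

    <%
    ' --- Read path: ADO against the site database (fast for complex queries) ---
    Dim conn, rs
    Set conn = Server.CreateObject("ADODB.Connection")
    ' Placeholder connection string; CM_P01 is a hypothetical site database name
    conn.Open "Provider=SQLOLEDB;Data Source=SQLHOST;Initial Catalog=CM_P01;Integrated Security=SSPI;"
    Set rs = conn.Execute("SELECT TOP 50 Name0, Client_Version0 FROM dbo.v_R_System ORDER BY Name0")
    Do Until rs.EOF
        Response.Write Server.HTMLEncode(rs("Name0") & "") & "<br/>"
        rs.MoveNext
    Loop
    rs.Close : conn.Close

    ' --- Change path: SMS Provider (WMI) for a collection membership change ---
    Dim loc, sms, coll, rule
    Set loc = Server.CreateObject("WbemScripting.SWbemLocator")
    Set sms = loc.ConnectServer("SITESERVER", "root\sms\site_P01")   ' placeholder server and site code
    Set coll = sms.Get("SMS_Collection.CollectionID='P0100012'")     ' placeholder collection ID
    Set rule = sms.Get("SMS_CollectionRuleDirect").SpawnInstance_()
    rule.ResourceClassName = "SMS_R_System"
    rule.RuleName = "PC001"        ' placeholder device name
    rule.ResourceID = 16777220     ' placeholder resource ID
    coll.AddMembershipRule rule    ' the provider hands this off for the site to process
    %>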

What strikes me as odd is that I’ve had two people in particular express disagreement over that approach, but then I remind them that SSRS, the underpinning of the Reporting Services Point role, also reads straight from the site database, so I’m not breaking new ground here by any means.  The majority of users really don’t care as long as it (A) works reliably, and (B) doesn’t cause noticeable drag on site server performance (it doesn’t).

I have noticed that in cases where additional (quantifiable) performance drag was experienced, it was always due to underlying SQL issues.  This includes a lack of proper resource allocation (too little memory, improper memory limits, improper disk allocations) or poorly maintained resources (index fragmentation, lack of indexing, etc.).  A quick pointer to one of Steve Thompson’s articles usually takes care of that.
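As a quick illustration (a sketch, not something shipped with CMWT), that kind of fragmentation check can be run through the same ADO channel.  The sys.dm_db_index_physical_stats DMV is standard SQL Server, and the 30 percent threshold is just a common rule of thumb:

    <%
    ' Hypothetical health check: list heavily fragmented indexes in the site DB
    Dim conn2, rs2, sql
    Set conn2 = Server.CreateObject("ADODB.Connection")
    conn2.Open "Provider=SQLOLEDB;Data Source=SQLHOST;Initial Catalog=CM_P01;Integrated Security=SSPI;"
    sql = "SELECT OBJECT_NAME(ips.object_id) AS TableName, i.name AS IndexName, " & _
          "ips.avg_fragmentation_in_percent " & _
          "FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips " & _
          "JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id " & _
          "WHERE ips.avg_fragmentation_in_percent > 30 ORDER BY 3 DESC"
    Set rs2 = conn2.Execute(sql)
    Do Until rs2.EOF
        Response.Write rs2("TableName") & " / " & rs2("IndexName") & " : " & _
            FormatNumber(rs2("avg_fragmentation_in_percent"), 1) & "%<br/>"
        rs2.MoveNext
    Loop
    rs2.Close : conn2.Close
    %>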

The Web Application Model

Another common question is about the choice of ASP versus ASP.NET, MVC, and so on.  Well, to be honest, I just glued together years of work done in classic ASP and made it happen in short order.  In fact, the total amount of time invested in CMWT from 1.0 to 1603.21 is 42 hours.  Yes, that’s correct: 42 hours.

Advantages

  • Centralized portal for management
  • Centralized portal for access control and auditing
  • Reduces application (console) deployments
  • Enables access from more platforms (mobile devices, etc.)
  • Easier to expose via VDI / RDP and even non-VPN channels
  • Independent of underlying products (independent upgrade cycle)
  • Extends capabilities without modifying underlying products (ConfigMgr, AD, etc.)
  • Allows for independent feature development

ASP versus ASP.NET

Yes, ASP.NET offers many advantages over classic ASP, but when you pick them apart, those advantages really only show themselves at scale.  The bigger the project (more developers), the bigger the processing load (larger data sets, more transactions), and the more formal the SDLC requirements (isolation, traceability, version control), the more obvious a choice ASP.NET becomes.  For a small project handling small data sets and low-volume transactions, with a one-man development team, it does not matter.

In fact, just to cite the age-old “if it ain’t broke, don’t fix it” adage, there are still plenty of public (and many, many more private) web sites running on classic ASP: banks, airlines and governments, to name a few.  Even Microsoft still relies on “old technology” within its latest products.

Online versus Offline Transactions

Another decision I had to make, and remake many times, was how best to handle UI-initiated actions on the back end.  Is it better to use “real-time” or “queued” processing?

For some aspects, that decision was actually made for me.  For example, choosing the SMS Provider for change-requests means offline processing, due to the way the SMS Provider queues requests in the background for SQL and other processes to handle sequentially.

For non-ConfigMgr change-requests, I’ve tried SQL databases for queuing transactions, web services with scheduled job managers, and real-time (direct) execution.  This includes things like invoking ConfigMgr client actions on remote clients, invoking Windows API and command calls on remote clients, and so on.
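For the “online” (direct) flavor, the page simply reaches out to the remote client in-line with the web request.  A minimal sketch, assuming the app pool identity has WMI rights on the target machine; SMS_Client in root\ccm and its TriggerSchedule method are standard ConfigMgr client interfaces, and the GUID shown is the well-known machine policy retrieval schedule:

    <%
    ' Online model: trigger a machine policy refresh on a remote client, in-line with the request
    Dim wmiLoc, clientWmi, smsClient
    Set wmiLoc = Server.CreateObject("WbemScripting.SWbemLocator")
    Set clientWmi = wmiLoc.ConnectServer("PC001", "root\ccm")   ' placeholder client name
    Set smsClient = clientWmi.Get("SMS_Client")
    smsClient.TriggerSchedule "{00000000-0000-0000-0000-000000000021}"   ' machine policy retrieval & evaluation
    Response.Write "Policy refresh requested on PC001"
    %>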

Advantages of Offline Processing

  • Structured/Ordered Processing Control
  • Auditing and Reporting capabilities
  • Separation of application and transaction processing

Advantages of Online Processing

  • Real-time (or near real-time) execution
  • Fewer moving parts
  • Distributed transaction loads (less demand on central server)

But there’s more.  Yes.  In addition to the intrinsic differences, I found that scale matters, a lot.  The more users the application has, the more weight is placed on various aspects, which affects the decision as to which is the “better” route.  In cases where the total number of users is fewer than 5, Online Processing works very well.  But when the number of users rises above 10-12, it’s much better to go with Offline Processing, with greater attention paid to proper resource allocation (server hosting, memory, network, processing, services management, etc.).
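For the “offline” flavor, the page only records the request, and a separate worker (scheduled task, service or job manager) drains the queue later.  A minimal sketch, assuming a hypothetical dbo.TaskQueue table rather than anything that ships with CMWT:

    <%
    ' Offline model: enqueue the request; a scheduled worker processes the TaskQueue table later
    Dim qConn, qCmd
    Set qConn = Server.CreateObject("ADODB.Connection")
    qConn.Open "Provider=SQLOLEDB;Data Source=SQLHOST;Initial Catalog=CMWT_Queue;Integrated Security=SSPI;"
    Set qCmd = Server.CreateObject("ADODB.Command")
    Set qCmd.ActiveConnection = qConn
    qCmd.CommandText = "INSERT INTO dbo.TaskQueue (TaskType, TargetName, Payload, RequestedBy, RequestedAt) " & _
                       "VALUES (?, ?, ?, ?, GETDATE())"
    ' 200 = adVarChar, 1 = adParamInput
    qCmd.Parameters.Append qCmd.CreateParameter("TaskType", 200, 1, 50, "TriggerSchedule")
    qCmd.Parameters.Append qCmd.CreateParameter("TargetName", 200, 1, 64, "PC001")
    qCmd.Parameters.Append qCmd.CreateParameter("Payload", 200, 1, 255, "{00000000-0000-0000-0000-000000000021}")
    qCmd.Parameters.Append qCmd.CreateParameter("RequestedBy", 200, 1, 64, Request.ServerVariables("AUTH_USER"))
    qCmd.Execute
    qConn.Close
    Response.Write "Request queued for offline processing"
    %>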

Browser Issues

For some features, it matters which browser is being used.  For example, some of the Client Tools features, like the “WinRS Command Console”, work best with Internet Explorer or Edge, due to scripting and security constraints.  But this only matters when using the Online Processing model for instantiating remote procedure calls.  For Offline Processing, the browser really doesn’t matter.

Resistance to Change

One of the more interesting aspects of this project has been the many discussions with colleagues about “why the lack of interest?”  Keep in mind that I’m not a salesperson, so my motivations around this question are entirely different, and much more analytical than desperate.  I don’t harbor nearly as much sentiment over customer numbers as a sales rep would.  My motivations are around solving problems and filling a gap in needs.  So the question for me is “what is this not providing?”

Some potential sources of resistance (or aversion):

  • Contentment.
    • They are comfortable with the existing console and console deployment/support/maintenance model.
  • Reliance on the ancillary local PowerShell module.
    • The majority of console users are also using the local PowerShell resources, which in a scaled environment should be server-hosted and possibly even leverage PowerShell Web Access.
  • Trust.
    • As in, “we don’t know this ‘Skatterbrainz’ asshole, so why take a chance?”
    • Interesting side-note to this is to contrast the Linux/FOSS community with the Microsoft/Corporate community when it comes to trusting funny-sounding names on products or vendors.
  • Third-party aversion.
    • Some shops just do not like introducing additional or peripheral products into their process model.  Sort of like the showroom car customer who avoids adding aftermarket custom parts.

Finally, the Cloud

One conclusion I arrived at involves the roll of the dice regarding Azure, EMS and Intune.  It doesn’t take a scientist to see the writing on the wall regarding the future of Configuration Manager and Intune.  We can already see what’s happening with Orchestrator and SMA/Azure Automation, and with SCOM and OMS.  Microsoft has made it clear that supporting different code platforms for its enterprise product lines is not the game plan; the one platform going forward is Azure (or Azure Stack).

In fact, I’m really surprised that during the course of developing Windows 10, they weren’t already working in parallel to put a fork in Configuration Manager and build out Intune to fill that gap.  Some still think that was unreachable, but when you sit down and diagram it out from an app-dev perspective (my background), it’s not that complicated.  A cloud service, with on-prem site roles and agents.  Gee, sounds like OMS, doesn’t it?

Intune only needs to add hybrid payload delivery in order to match ConfigMgr at the nuts-and-bolts level.  What does that mean?  It means solving the data egress considerations by placing the delivery footprint on-premises, flipping the “cloud DP” concept on its head.

The concept of sites and site boundaries isn’t that hard to replicate from an outside-in model like Intune back to on-premises.  They already use a quasi-approach for determining on-prem presence, via the HTTP request channel.  I’d spew forth more details of how this could work, but I’d be stupid to do that without a job offer.  And besides, I’m sure they’re already way ahead of me on this.  I’d be shocked if they weren’t.

So, coming around full circle, what I’m really saying is that spending a ton of effort on putting a web interface on Configuration Manager, or even Active Directory, is like building a new steam engine: its days are numbered.  Time to move on.

Conclusion

This doesn’t actually mean the end of CMWT.  It just means no more trying to make it a viable standalone product.  I will continue using it for customers who need what it has to offer, but the “open source” experiment is now complete.  Build 1603.21 will no longer be publicly maintained or improved upon.  It is being pulled off of GitHub and placed in an archive, with a link from the “Downloads” page of this site.  Privately, I will continue further development.
