Let me start by saying this:  I am not an expert programmer.  There, I said it.  In fact, I’m not going to say I’m an “expert” in anything, aside from being me.  Little old, dysfunctional, funny-looking me.  Actually, I think very few real experts would blatantly claim the term.  Most of those I’ve met are too humble to assume that mantle, even if everyone on Earth would argue otherwise.  Okay, enough said.

I’ve been writing software code since roughly 1988, even if it was at a very remedial level at the time.  Technically, I wrote my first program in 1974 at the NASA-Langley Research Center, after being randomly picked from a primary school classroom by a bearded NASA scientist, along with six others, escorted to a bus, and driven over to a classroom to learn how to enter FORTRAN code statements to print renowned cartoon characters on a big IBM dot-matrix printer.  I got to do the Pink Panther.  But I didn’t touch a computer again until the mid-1980s.

I’d say by 1992-93, I was picking up enough momentum to start paying attention to things like format, documentation, debugging and so on.  Those were the fun days of software development.  Before the bean counters pried the locks open and entered quietly to apply metrics and bureaucracy.

Back then, hardware was at a premium, so there was a significant and determined effort made to instill a sense of refinement in the minds of programmers.  That refinement meant respecting the valuable limits of hardware resources.  Memory and processor cycles were the most cherished at the time.  We hadn’t yet begun to stress the storage media (aside from sheer data volume anyway), and network I/O was still not front-and-center.

As time passed, hardware improved, and scaled, incredibly faster than software, but not as evenly as it had before.  So now, with “average” business programming, the emphasis has shifted away from hardware constraints and toward other areas of value: version control, testing, analytics, and so on.  Documentation is still valued, but in different ways, often kept in external, metadata-driven forms beyond the source code itself.  In a sense, it’s like applying a Boyce-Codd approach to normalization: separating code from logic from documentation, and so on.

But, some things seem to be getting lost in the rush to make a new application or service.  The tech schools are busy focusing on teaching product-based skills, to directly bolster resume potential, rather than infuse brains with deeper conceptual, theoretical and historical knowledge.  But I digress (I do that a lot obviously).

On to the meat and potatoes:

  1. Document your code.  Not just a little, a LOT.  Don’t write a Michener novel or a treatise, but do make it readable.  It should be written for another programmer, not a manager or someone’s grandmother.  The audience should be expected to have a similar background to your own.  That said, if your employer is a complete dick, write it to be technically precise but sufficiently obfuscated, stipulated with terse rationality. 🙂  (There’s a short documentation sketch after this list.)
  2. Close performance gaps.  If you open a connection to a resource, strive to situate the open and close statements in the closest possible proximity.  Keep those connection sessions as brief as possible.  It’s not just for your precious code, but for the resource being taxed on the other end.  (See the connection sketch after this list.)
  3. Don’t assume anything.  Test every condition as though you expect it to be intentionally broken.  Think of the dumbest user you know and how they’d try to misuse your application.  Now, code for that scenario.  Also, for situations that involve pipeline or intermediary patterns, be careful not to overlook places where an error or unexpected result could creep into the pipeline without being properly handled.  Wrap risky code in try/catch blocks, or use other exception handling, religiously.  Use strongly-typed values, parameters and constrained functions wherever possible.  (A validation sketch follows this list.)
  4. Never stop refactoring.  If there are any repeated chunks of code, combine them into a function, or some other reusable entity.  (A small before-and-after sketch follows this list.)
  5. Never stop being curious.  Just because the example you’re testing uses one approach doesn’t mean you shouldn’t try others.  You never really understand the “how” and “why” of code patterns until you experience their real behavior (and punishment).  Sometimes it’s better to do it wrong the first time than to hit the nail perfectly.  Otherwise, you may not really understand why the “wrong” approach is “wrong”.
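
To make item 1 concrete, here is a minimal sketch, in Python, of the comment and docstring density I have in mind.  The function and its order structure are entirely made up for illustration; the point is that another programmer should be able to pick it up cold, without a treatise.

```python
from datetime import datetime, timedelta, timezone

def archive_stale_orders(orders, cutoff_days=90):
    """Split orders into (active, archived) lists by age.

    Written for the next programmer, not a manager: the cutoff is in
    days, the input list is never mutated, and both lists come back
    so the caller decides what to persist or delete.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=cutoff_days)
    active, archived = [], []
    for order in orders:
        # Each order dict is assumed to carry a timezone-aware "placed_at" datetime.
        if order["placed_at"] < cutoff:
            archived.append(order)
        else:
            active.append(order)
    return active, archived
```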
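
For item 2, here is a minimal sketch using Python’s standard sqlite3 module.  The table and query are invented; the pattern is what matters: prepare everything before opening the connection, release it the moment the rows are fetched, and do the post-processing afterwards.

```python
import sqlite3
from contextlib import closing

def load_totals(db_path, region):
    # Do the slow preparation *before* touching the shared resource,
    # so the connection session stays as brief as possible.
    query = (
        "SELECT customer_id, SUM(amount) "
        "FROM orders WHERE region = ? "
        "GROUP BY customer_id"
    )

    # Open late, close early: closing() guarantees the connection is
    # released even if the query raises.
    with closing(sqlite3.connect(db_path)) as conn:
        rows = conn.execute(query, (region,)).fetchall()

    # Any post-processing happens after the connection is already closed.
    return {customer_id: total for customer_id, total in rows}
```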
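
For item 3, a small sketch of the “assume it’s broken” posture: typed parameters, explicit checks before any work begins, and exception handling right where bad data could creep into the pipeline.  The CSV layout (item, quantity) is invented for the example.

```python
import csv
from pathlib import Path

def parse_quantities(csv_path: Path) -> dict[str, int]:
    """Read item,quantity rows, refusing anything that looks wrong."""
    if not csv_path.is_file():
        raise FileNotFoundError(f"No such input file: {csv_path}")

    quantities: dict[str, int] = {}
    with csv_path.open(newline="") as handle:
        for line_number, row in enumerate(csv.reader(handle), start=1):
            # Assume every row is hostile until proven otherwise.
            if len(row) != 2:
                raise ValueError(f"Line {line_number}: expected 2 columns, got {len(row)}")
            item, raw_quantity = row
            try:
                quantity = int(raw_quantity)
            except ValueError as exc:
                raise ValueError(f"Line {line_number}: quantity {raw_quantity!r} is not an integer") from exc
            if quantity < 0:
                raise ValueError(f"Line {line_number}: negative quantity {quantity}")
            quantities[item.strip()] = quantity
    return quantities
```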
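
And for item 4, the simplest possible before-and-after: two pasted copies of the same formatting logic collapsed into one reusable function.  The report shapes are made up; real refactoring targets are rarely this obvious, but the mechanics are the same.

```python
# Before: the same summary logic pasted into two places.
def print_sales_report(sales):
    print(f"Sales total: {sum(sales):,.2f} ({len(sales)} entries)")

def print_refund_report(refunds):
    print(f"Refund total: {sum(refunds):,.2f} ({len(refunds)} entries)")

# After: one function owns the shared behavior; callers just name the label.
def print_report(label, amounts):
    print(f"{label} total: {sum(amounts):,.2f} ({len(amounts)} entries)")

print_report("Sales", [19.99, 240.00, 3.50])
print_report("Refund", [12.00])
```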

Most of these I’ve picked up from years of pulling my own hair out, but just as much from my past and present colleagues, college professors, and hours and hours of web searches.  I’m still learning.  Every skilled programmer is still learning.  There is no single goal line for every developer to cross.  You have to keep one eye on the job in front of you, and the other on the horizon.
