I need to brain-dump for a moment. I’m overdue, actually, given that my last few posts have been either stupid+goofy or dry code snippets. And since I’ve had a glass of Merlot, which makes me feel smarter, I can’t waste the opportunity either. Let me start with a question:
Which would you consider to be “real programming”: writing device drivers in Assembler or C, or building web services with C# and JSON? Or is it dragging graphical widgets onto a workspace and drawing connectors between them? Or is it talking to a computer and having it develop a function, process, system, or application from your commands? Or is it customizing a ribbon link inside an application?
A discussion a few months ago with a long-time colleague, and all-around uber kickass crazy-brilliant savant, revolved around “coding versus programming”… and also around beer. Okay, mostly around beer. The discussion kept coming back to the notion of whether a “developer” is really “writing code” when they drag a GUI widget onto a design workspace and drag some connectors to build a workflow, stopping occasionally to edit some attributes (parameters) along the way. Where is this magical “line” that delineates a real programmer from a tinkerer?
Many analogies and metaphors arose.
Is a DJ making music when he/she scratches a record to a backbeat? Is it art when you paste torn magazine clippings onto a canvas board along with splashes of paint? Is it cooking if you add your own meats and vegetables to a steaming cup of Ramen noodles? Is it acting if the character is a computer animation?
When is it creation versus manipulation?
The best we could surmise is that it somehow fits with the ethereal rationalization of what makes music “good”. A trained musician can listen to a song of almost any genre and proclaim it to be “good”, typically based upon this thing they call “sincerity”. I’m a musician, or was anyway, and I can very much sympathize with this assessment. I can listen to a song from a genre I generally dislike, even vehemently, and still find a specimen that makes me say “that’s good work!” But how would you describe that to an AI receptor?
Subjectivity is the overriding factor here, obviously. But subjectivity is evil in the computational world, whether it be mathematics, chemistry, astrophysics, or computer science. Digits are optimal. Which is why technofolk prefer .33 over 33%, and Pi is just some humorous anomaly in the world of absolutes. Too nerdy? Ha!
Okay, so, after spending a short time with Assembler back in the stone ages, I can very much appreciate the effort that goes into writing machine-level instructions via things as elevated as C. I can also appreciate class modeling in C++ or C# as being on par with registers and bit-level manipulation. I can also appreciate conglomerated polysystems, such as clustered services behind load-balanced web services with multiple endpoints. Multiple consumers of multiple providers. Parallel and neural processing algorithms as well, even though they sit much higher in the abstract hierarchy of machine linguistics.
SQL developers often roll their eyes at someone who proudly calls themself a “database developer” working with Microsoft Access forms and reports. To the former, it’s not really SQL. No deep introspection of normalization, relations, indexing and paramorphic schema operations. It’s just fucking around with forms (smacks glass off the table).
It’s taken me decades, but I can now begin to grasp, at a visceral level, what they’re all doing (okay, maybe that’s the wine again). Anyhow, where does this elusive boundary exist that divides programming from goofing around?
All this blabbering and I haven’t gotten any closer to finding this mystical thing. When does modification of a software entity become programming?
I promise, I’m going back to coffee for a while.