An Ethos for Sustainable Computing
An essay to assert what I have learned from my research about the fundamental nature of computers.
This was originally penned on the 22nd day of May, 2020. It was a Friday. It has been reprinted here for posterity and/or austerity (whichever you prefer). Enjoy.
Preface
Nearly two months ago, I hit a major milestone in my work with understanding the fundamental problems we all face with computing. I published the work in essay form as The Injustice of Complexity in Computing, applying a smothering of pathos and a smattering of ethos in a case I made to defend ‘simple computing’. I later appeared in a video interview on Justin Murphy’s Other Life podcast, discussing this essay live in detail. I have known since shortly thereafter that, as proud as I am of the piece, it is lacking in several dimensions that could give it the strength it deserves. Earlier tonight I set out to explain why I chose Make for my work developing ANSI C projects, and realised I needed to be able to reference this ‘more complete’ foundation for argument. This is a proper attempt to codify my prescription for how we approach general computing in programming.
The material, irreducible, and lovable
Let’s start with an easy example of complexity. Being able to display millions of colours on a computer screen is possible thanks to the implementation of 24-bit RGB colour. Further, being able to craft complex images and show them at high refresh rates in real time is thanks to the computing power of modern CPUs and GPUs.
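To make the arithmetic behind “millions of colours” concrete, here is a minimal, purely illustrative ANSI C sketch (the function name is mine, not from any particular library) showing how a pixel is packed from 8-bit red, green, and blue channels, and why 24 bits yields 2^24, roughly 16.7 million, distinct colours.

#include <stdio.h>

/* Pack one 8-bit channel each of red, green, and blue into a single
 * 24-bit value, as a typical RGB888 framebuffer or image format would. */
static unsigned long pack_rgb24(unsigned char r, unsigned char g, unsigned char b)
{
    return ((unsigned long)r << 16) | ((unsigned long)g << 8) | (unsigned long)b;
}

int main(void)
{
    /* 8 bits per channel, 3 channels: 2^24 possible values. */
    unsigned long palette_size = 1UL << 24;

    printf("24-bit RGB can encode %lu colours\n", palette_size);   /* 16777216 */
    printf("pure red packs to 0x%06lX\n", pack_rgb24(255, 0, 0));  /* 0xFF0000 */
    return 0;
}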
The darkness behind the closet
The machinations behind this event—displaying a beautiful picture on a computer screen—are an enigma to everyone. As Peter Welch once said, “Not a single living person knows how everything in your five-year-old MacBook actually works.” A real-world example will vary in its details, but generally it will involve several massive layers of highly generalised abstractions.
Detailing the darkness
Take a program written in Visual C♯ that does graphics acceleration. When the programmer builds this, Visual Studio invokes a state-of-the-art optimising compiler that produces bytecode from source code. This bytecode is then packed into a PE-format executable, which contains a native code stub that calls into the .NET system libraries, which then reach back into the program and translate said bytecode into native instructions your CPU can understand. All of this happens before anything in the actual program, as written, is executed. From here, embedded startup routines initialise the runtime environment, automatically providing countless functions, data types, and other definitions to your code. This runtime also single-handedly deals with the very complex task of memory management, using a garbage collector. Once this is all set up, the program’s bytecode is scanned for its module dependencies, and those are imported as well. For C♯, these will mostly consist of wrappers over native libraries, but they also include genuine C♯ libraries, which are imported and processed in turn. This entire process repeats, recursively, with every EXE and DLL the runtime touches. Only once this mountain of code is brought into view may the program you wrote begin to execute.
Dimensionality of complexity
There are as many additional dimensions to a program’s complexity as there are steps enumerated above. That example simply goes over the typical startup routine of a program written in a Very High Level Language. There is an entire operating system underneath that. That OS is an amalgamation of countless protocols and services that are often individually quite complicated. USB-C and Thunderbolt are new wave ports popular with executive PCs, but under the hood they are among the most complicated protocols ever made. Of course they are, since they support most of the protocols (Ethernet, USB data, DisplayPort, audio) they intend to replace. The OS has to understand every detail about all of that. And when the machine works with the internet at the user’s level, it is doing so through a Chromium shell that weighs 100MB by itself. That program implements the entirety of all the major web standards, documents which are hundreds of pages long apiece and which carry tons of seemingly pointless complexity. Then that program deals with websites themselves, which are often single-page applications written with huge piles of JavaScript, sometimes weighing megabytes on their own, which need to be compiled down to native code to run. Then there are CDNs for delivering that code and the website’s assets. We haven’t even gotten into Cloudflare. That’s an entire billion-dollar business that bills itself as the “backbone of the web” because they built a startup around the fact that HTTP headers are made of text, and computers are performatively bad at reading text. Recently Google and friends have been working on new standards for HTTP that punt it onto the UDP protocol for performance benefits, pushing the reliability bookkeeping that TCP once handled up into applications to save some bytes, instead of improving TCP or fixing the awfully messy infrastructure underneath.
It seems quick
All of these processes appear to happen quickly because of nothing more than the sheer computing power of modern hardware. In truth, that startup routine is but a sliver of the massive complexity happening at every instant in almost every program you use. Piling on these things is what makes a modern computer seem slow. They’re self-compounding, too. It appears as if all programmers know how to do is add new technologies, when what the public needs is for some to be taken away.
The human cost of complexity
However, the unbearable costs of complexity are not ultimately found at runtime. They are found in the maintenance and operation of the software. Programmers have flattered themselves over the past several decades about the extent of their ability to deal with the complexity of code. There is a certain spirit behind the resilience and persistence in fixing things. As human as this valiant act is, it is equally human in its folly. The prevailing approach to dealing with bugs is untenable.
The people who already know
Security researchers and systems administrators are among the specialists acutely aware of this untenable problem with software, as they are the ones who deal with its misfortunes day to day. They understand, with great intuition, the ugly, unavoidable truth that most software is more complicated than it is worth.
It’s bad news for an establishment
This untenable complexity is bad news for shareholders. It is bad news for everybody invested in Web 3.0 in any way. It is bad news for malware authors and the APTs that rely on the microbial bloom of security flaws to conduct their nefarious business. In a completely self-interested world, few, if any, would have the personal incentive and resource capacity to deal with this reality about software.
What’s the worst that could happen?
Nonetheless, it is certainly untenable, as there are worse possibilities at hand than loss of profits or political favour. If this trend continues, time will carry on to a point where no one knows how to program in the old ways anymore. People will go to universities and be taught how to use VHLLs like C♯ and Python, and be given cursory lessons about C that teach them nothing particular to the language. The people who have experience writing in assembly and talking to bare-metal hardware will dwindle, and after a point will be placed out of reach of conventional hiring markets. It is possible that such knowledge could be forgotten entirely, leading us into a ‘Bronze Age collapse’ of computing that we won’t notice until it is too late to retreat from.
There is a solution
Instead of a decline or collapse, I offer a solution that restores computing to its optimal capacity. This method will bring the engineering of computing up to par with all other kinds of engineering, including civil, nautical, aerospace, and so on. This method will make software understandable to programmers. This method will beckon the return of what Peter Welch described as “good code”, in a capacity it was never properly given before. This theory forms the framework for demonstrating why simpler code must ultimately prevail in the world, including in markets and in realpolitik.
An assertion: They are machines
Computers are machines, and are merely machines. Machines have specific operating constraints which need to be adhered to, and their degradation or malfunction can be identified with certainty upon examination and diagnosis.
What was once simple
Until the late 1990s, computer architecture was simple enough for one engineer or a small team of engineers to reason about its behaviour and heuristics. CPU design was general-purpose, core system components followed vastly simpler protocols than they do today, and the software typically running on the machines was not vastly more complex than the machines themselves.
The material and the immaculate
As the new millennium approached, a split in complexity occurred. On one hand, there appeared material complexity. This material complexity was actively sought after by users, and declared valuable by businesses; it included monitors with millions of colours, CPUs with ever-higher clock rates, CPUs with multiple core complexes, and specialised processors like video cards. On the other hand appeared immaculate complexity. This complexity was ‘immaculate’ because it was demanded solely by the engineers who created and implemented it, as they all believed it was necessary, and no one who could understand the technicals they spoke of could argue otherwise. These included things like IPv6 and btrfs. Immaculate complexity proliferated at a pace exceeding material complexity, resting on ever-shakier justifications and explanations that no one bothered to put into context.
The irreducible and the superfluous
In the world of computing, there are complexities that are irreducible, and complexities that are superfluous. The proliferation of material complexity beckoned that which was irreducible, while the flourishing of immaculate complexity brought about all that was superfluous.
The human brain only apprehends complexity
Humans must make the conscious decision to practise differentiating the material from the immaculate in the code they deal with. Intuitively, our minds only try to deal with what is in front of us. It is hard to model how all of the systems involved work together architecturally, but this is the very thing which must be done. Some companies understand this endeavour, but they hit market-imposed limits long before their developers can work their way out of the complexity. Most products cannot live without their apps, which depend on oceans of operating system code, libraries, and frameworks working underneath. In fact, it is unrealistic to deploy any application for any significant modern operating system without accepting the absurdly high burden of complexity it imposes. Everyone is disincentivised from even trying to write good code, because there is no benefit when something hopelessly complicated underneath can crop up and screw with the program. You simply cannot devise a system that is well-behaved through and through.
A new hope: DOS
It is possible to go back to basics. This is not an easy task. Old operating systems, such as MS-DOS, provide PC computing with far fewer bells and whistles than normally ship with an operating system today. However, these systems were capable of a surprising amount of material complexity that went mostly unrealised in their day. VGA systems can display a surprisingly high number of colours using the undocumented Mode X, and the x86 can be put into unreal mode to achieve uncompromised control and ease of programming in assembly. Setting aside the complexities of the internet, the vast majority of programs would work well under these constraints.
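As a taste of how little stands between a DOS program and the display, here is a minimal sketch, assuming a 16-bit real-mode DOS compiler such as Turbo C or Open Watcom (which supply dos.h, union REGS, int86, MK_FP, and the far keyword). It asks the BIOS for the standard 256-colour mode 13h and writes a pixel straight into video memory; Mode X proper needs a few further writes to the VGA sequencer and CRT controller registers, but the flavour is the same.

#include <dos.h>   /* int86, MK_FP, union REGS -- 16-bit DOS compilers */

int main(void)
{
    union REGS regs;
    unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0);

    /* BIOS video interrupt: AH=0x00 (set mode), AL=0x13 (320x200, 256 colours). */
    regs.x.ax = 0x0013;
    int86(0x10, &regs, &regs);

    /* The framebuffer is just memory at segment A000h: one byte per pixel,
     * each byte an index into the 256-entry palette. Plot a pixel at (160, 100). */
    vga[100 * 320 + 160] = 15;  /* colour index 15: white in the default palette */

    /* Wait for a keypress, then restore 80x25 text mode. */
    regs.x.ax = 0x0000;
    int86(0x16, &regs, &regs);
    regs.x.ax = 0x0003;
    int86(0x10, &regs, &regs);
    return 0;
}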
To do what DOS could not
There will be some programs that, by the nature of their purpose, will not be sated by emulated DOS machines running unreal mode X. Graphics editing applications are one example. By beginning with a machine kept simple, it is possible to add the material complexity necessary to support the needed features, while taking care to avoid immaculate complexity that could gum up the works. This will often involve creating new hardware specifications, too. In any case, awareness and cognisance are key. Millions of colours and pen input are, by themselves, not incredibly complicated things. Everything else we are forced to live with in order to enjoy them today… is.
The human factor
Awareness and cognisance also need to be applied to moderate the human element of programming. Humans have hard limitations on how much objective logos they can process in a given time. Writing good code is harder than reading it. Some think that writing esoteric code is a show of knowledge because of the intricacies it displays, but it is really a show of naïvety because of how unintelligible it will be to most people. Most programming languages are, by design, more complicated than they need to be. C++ is a pertinent example of this, particularly in contrast to its parent language, ANSI C. We must descend through all of the layers of our technology and remove immaculate complexity no matter where we find it. This won’t be convenient, but it must be done.
What we must do
In every aspect of software development, we must prefer the simple over the complex. We must be able to do this evaluation across domains, all up and down the software stack at hand. In order to maximise simplicity, we must gain the courage to make hard decisions about our technology and stick with them. We must abhor baseless, hypothetical-edge-case arguments in favour of clearly defined boundaries for the operability of our program. We must accept that we cannot implement and maintain something for “all systems” any more than we can do so for “all use cases”. The public will find an exclusive application more palatable than a perpetually broken one, so let us deliver that to them.
Again, it won’t be easy. But it will be worth it.
Hey! Thanks for reading. This one is a republishing, so it’s a free read, as before. I run this Substack to help break myself out of relative poverty and earn the white collar lifestyle I was not endowed with growing up. It’s $5.55/month to subscribe, or $55.55/year. That’s like the Interstella movie, or something. Think Daft Punk. Totally worth it.