In a recent discussion on G+, a friend of mine made a conservative argument for textual over binary interchange protocols on the grounds that programs always need to be debugged, and thus readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.

I agree with this argument; I’ve made it often enough myself, notably in The Art of Unix Programming. But it was something his opponent said that nudged at me. “Provable programs are the future,” he declaimed, pointing at seL4 and CompCert as recent examples of formal verification of real-world software systems. His implication was clear: we’re going to get so much better at turning specifications into provably correct implementations that debuggability will soon cease to be a strong argument for protocols that can be parsed by a Mark I Eyeball.
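To make the tradeoff concrete, here is a minimal sketch of the same record in both styles. The field names and wire format are invented for illustration, not drawn from any real protocol:

```python
import struct

# A hypothetical sensor reading: (id, temperature, flags) -- illustrative only.
record = (42, 23.5, 0x03)

# Textual encoding: a little bulkier, but trivially readable in a protocol dump.
textual = "SENSOR id=42 temp=23.5 flags=3\n".encode("ascii")

# Binary packing (little-endian uint32, float32, uint8): smaller,
# but opaque to the Mark I Eyeball.
binary = struct.pack("<IfB", *record)

print(len(textual), textual)  # 31 bytes, self-describing
print(len(binary), binary)    # 9 bytes, meaningless without the format string
```

The binary form saves two-thirds of the bytes, but only the textual form tells you what went wrong when you’re staring at a hex dump at 3 AM.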

Oh foolish, foolish child, that wots not of the Rule of Technical Greed.

Now, to be fair, the Rule of Technical Greed is a name I just made up. But the underlying pattern is a well-established one from the earliest beginnings of computing.

In the beginning there was assembler. And programming was hard. The semantic gap between how humans think about problems and what we knew how to tell computers to do was vast; our ability to manage complexity was deficient. And in the gap software defects did flourish, multiplying in direct proportion to the size of the programs we wrote.

And the lives of programmers were hard, and the case of their end-users miserable; for, strive as the programmers might, perfection was achieved only in toy programs while in real-world systems the defect rate was nigh-intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the designers and advocates of higher-level languages. And they said: “With these tools we bring you, the semantic gap will lessen, and your ability to write systems of demonstrable correctness will increase. Truly, if we apply this discipline properly to our present programming challenges, shall we achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly was the debate resolved, despite a few curmudgeons who muttered that it would all end in tears. And compilers were adopted, and for a brief while it seemed that peace and harmony would reign.

But it was not to be. For instead of applying compilers only at the scale of software engineering that had been customary in the days of hand-coded assembler, programmers used these tools to design and implement ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the advocates of structured programming. And they said: “There is a better way. With some modification of our languages and trained discipline exerted in the use of them, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly was the debate resolved, despite a few curmudgeons who muttered that it would all end in tears. And languages which supported structured programming and its discipline came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, sweet birdsong beneath rainbows, etc.

But it was not to be. For instead of applying structured programming only at the scale of software engineering that had been customary in the days when poorly-organized spaghetti code was the state of the art, programmers used these tools to design ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the advocates of systematic software modularity. And they said: “There is a better way. By systematic separation of concerns and information hiding, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly was the debate resolved, despite a few curmudgeons who muttered that it would all end in tears. And languages which supported modularity came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, the lion lie down with the lamb, technical people and marketeers actually get along, etc.

But it was not to be. For instead of applying systematic modularity and information hiding only at the scale of software engineering that had been customary in the days of single huge code blobs, programmers used these tools to design ever more complex modularized systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though now greatly improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Are we beginning to see a pattern here? I mean, I could almost write a text macro that would generate the next couple of iterations. Every narrowing of the semantic gap, every advance in our ability to manage software complexity, every improvement in automated verification, is sold to us as a way to push down defect rates. But how each tool actually gets used is to scale up the complexity of design and implementation to the bleeding edge of tolerable defect rates.

This is what I call the Rule of Technical Greed: As our ability to manage software complexity increases, ambition expands so that defect rates and expected levels of technical debt are constant.

The application of this rule to automated verification and proofs of correctness is clear. I have little doubt these will be valuable tools in the relatively near future; I follow developments there with some interest and look forward to using them myself.

But anyone who says “This time it’ll be different!” earns a hearty horse-laugh. Been there, done that, still have the T-shirts. The semantic gap is a stubborn thing; until we become as gods and can will perfect software into existence as an extension of our thoughts, somebody’s still going to have to grovel through the protocol dumps. Design for debuggability will never be a waste of effort; even if we believe our tools proceed perfectly from ideal specification to flawless implementation, how else will an actual human being actually know?

UPDATE: Having learned that “risk homeostasis” is an actual term of art in road engineering and health risk analysis, I now think this would be better tagged the “Law of Software Risk Homeostasis”.