I had ideas on this subject, and I put them into a book 20 years ago. It's long out of print, but you can still get used copies on Amazon.

One simple answer to your question is as old as Aristotle: Nature abhors a vacuum. As much as machines have gotten faster and bigger, software has gotten slower and bigger.

To be more constructive, what I proposed was that information theory, and its direct relevance to software, be part of computer science education. It is only taught now, if at all, in a very tangential way.

For example, the big-O behavior of algorithms can be very neatly and intuitively understood if you think of a program as a Shannon-type information channel, with input symbols, output symbols, noise, redundancy, and bandwidth.

On the other hand, the productivity of a programmer can be understood in similar terms using Kolmogorov complexity (algorithmic information theory). The input is a symbolic conceptual structure in your head, and the output is the program text that comes out through your fingertips; the programming process is the channel between the two. When noise enters the process, it creates inconsistent programs (bugs). If the output program text has sufficient redundancy, the bugs can be caught and corrected (error detection and correction). However, if it is too redundant it is too large, and its very size, combined with the error rate, reintroduces bugs.

As a result of this reasoning, I spent a good part of the book showing how to treat programming as a process of language design, with the goal of being able to define the domain-specific languages appropriate for a need. We do pay lip service to domain-specific languages in CS education but, again, only tangentially.
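The redundancy point can be made concrete with a small sketch (mine, purely illustrative): an assertion restates the code's intent in a second form, and that redundant restatement is what allows an inconsistency, i.e. a bug, to be detected instead of propagating silently.

```python
def running_mean(xs):
    """Return the running mean of xs after each element."""
    total = 0.0
    means = []
    for i, x in enumerate(xs, start=1):
        total += x
        mean = total / i
        # Redundancy: the invariant is stated a second time, in a
        # different form. If noise (a typo, an off-by-one) makes the
        # two statements inconsistent, the error is caught here
        # rather than surfacing far downstream.
        assert abs(mean * i - total) < 1e-9
        means.append(mean)
    return means

print(running_mean([2, 4, 6]))  # prints "[2.0, 3.0, 4.0]"
```

The assertion adds no new information, only redundancy, which is exactly the Shannon recipe for detecting transmission errors.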

Building languages is easy. Every time you define a function, class, or variable, you are adding vocabulary to the language you started with, creating a new language to work in. What is not generally appreciated is that the goal should be to make the new language a closer match to the conceptual structure of the problem. If this is done, the code gets shorter and less buggy simply because, ideally, there is a 1-1 mapping between concepts and code. With a 1-1 mapping you might still make a mistake and code one concept where you meant another, but the program will not crash; crashing is what happens when code encodes no consistent requirement at all.
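Here is a minimal sketch of what I mean, in Python, using a made-up dosing domain (the domain and all the names are hypothetical, chosen only to show the shape of the idea): each function adds one word of vocabulary, and each word corresponds to exactly one concept in the problem.

```python
from dataclasses import dataclass

# Each definition below is one new word of vocabulary, matching one
# concept in the (hypothetical) problem domain 1-1.

@dataclass(frozen=True)
class Dose:
    drug: str
    mg: int

@dataclass(frozen=True)
class Schedule:
    dose: Dose
    times_per_day: int

def dose(drug, mg):
    """Concept: 'a dose of a drug'."""
    return Dose(drug, mg)

def times_daily(n, d):
    """Concept: 'take it n times a day'."""
    return Schedule(d, n)

def daily_total_mg(s):
    """Concept: 'total daily exposure'."""
    return s.dose.mg * s.times_per_day

# The resulting sentence reads like the requirement itself:
rx = times_daily(3, dose("amoxicillin", 500))
print(daily_total_mg(rx))  # prints "1500"
```

Compare this with juggling bare tuples and magic numbers: the vocabulary version is shorter at the call site, and a mistake in it tends to be a wrong concept, not an inconsistent one.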

We are not getting this. For all our brave talk about software system design, the ratio of code to requirements is getting bigger, much bigger.

It's true, we have very useful libraries. However, I think we should be very circumspect about abstraction. We should not assume that, because B builds on A and that is good, C built on B is even better. I call it the "princess and the pea" phenomenon: piling layers on top of something troublesome does not necessarily fix it.

To terminate a long post, I've developed a style of programming (which sometimes gets me in trouble) where