
Like any field with an active research community, Computer Science changes over the decades, sometimes quite drastically. If a practitioner doesn't keep up, he or she will be left behind, unable to understand the new work.

Decades ago, problems tended to be smaller in scope: how do I sort an array? Now they are massive: how do I scale a cloud environment and make it resilient to failure and attack?

To cope with this change in scope and scale, we need to learn to think at ever higher levels of abstraction. It is hard to write complex programs in languages that permit only simple constructs.
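As a small illustration of what a difference abstraction makes, here is the same task, sorting a list, at two levels (Python used purely for illustration):

```python
# High level: one call; correctness and performance are the library's problem.
data = [5, 3, 8, 1]
data.sort()

# Low level: the same task spelled out step by step (insertion sort).
def insertion_sort(a):
    """Sort list a in place by repeatedly inserting each element."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger elements right
            j -= 1
        a[j + 1] = key
```

A student working at the first level can get on with the larger problem; a student stuck at the second never reaches it.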

Language research is still an active pursuit. The ACM still has a special interest group, SIGPLAN, devoted to programming languages; its POPL symposium (Principles of Programming Languages) is one example. The papers and discussions are deep and "modern."

There are many other ACM SIGs, some of which, SIGMOBILE for example, focus on technologies, such as smartphones, that scarcely existed a couple of decades ago.

The other thing is that any such field eventually grows to the point that no single practitioner can grasp all of it. In mathematics, for example, that point was passed around 1900. It has probably been passed in CS as well.

Regarding education, the changes in the field have also driven changes in teaching, in both content and methodology. I can speak only for what happens in the US, since all my experience is here. An undergraduate CS major will get two things. First, a broad education: history, philosophy, and the rest, along with some writing. Second, enough education in the major field to either go on to a graduate degree or obtain an entry-level position in industry. But they will normally never be taught any one aspect of CS (or Math, or Literature, or ...) deeply, nor the field broadly enough to say they know much about the whole of it.

The Master's degree in a technical field, on the other hand, should teach students the things that every working professional needs to know. Again, this isn't necessarily very deep or broad, but it has to be roughly comprehensive for the working pro. It should also bring you a bit closer to the ability to do research, if it doesn't already include some research component. But usually the research required there is into what is already known about a possibly new or arcane topic, rather than the creation of new knowledge in the field. That is for doctoral programs.

The effect of this is that undergraduate teaching now includes options that were not available in the past (big data, machine learning, ...) through upper-level electives. It also means that some things working pros needed in the past (numeric algorithms, say) are not taught to the same degree, since much of what was once hard is now captured in standard libraries. Methodologies change too: teamwork is used far more in the classroom than was typical a few decades ago.
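To see what "captured in standard libraries" means in practice, here is a small Python sketch: careful floating point summation, once a hand-coded numeric technique (Kahan summation), is now a single library call.

```python
import math

# Repeatedly adding 0.1 accumulates rounding error, because 0.1 has no
# exact binary representation.
values = [0.1] * 10

print(sum(values))        # 0.9999999999999999 -- naive left-to-right sum drifts
print(math.fsum(values))  # 1.0 -- correctly rounded; no hand-written Kahan loop needed
```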

The conclusion I draw from all of this is that students need to start higher up the learning tree if they are to get where they need to go. In my view, recapitulating the history of computation, starting with low-level concepts and tools and building everything up incrementally, is a mistake. There is also no need: students can start at almost any level of abstraction and work upwards from there intensively. They can make excursions down the abstraction hierarchy on occasion, since some of that aids understanding, but a complete understanding of the foundations of, say, IEEE floating point arithmetic is needed by only a very few working pros. If you start the education today the same way it was done 20 years ago, there will be no time to get to today's problems and solutions.

I used to tell my students that my job was not to teach them what I know, because much of that is now obsolete. My job was to teach them what they need to know. Not the same thing. One example of such a now-inessential detail is the hidden bit in IEEE Floating Point.
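For anyone curious what that detail is: in IEEE 754, the significand of a normalized number always begins with a 1, so that leading bit is never stored; it is "hidden" and supplied implicitly. A minimal Python sketch (the helper name is mine, purely for illustration):

```python
import struct

def show_bits(x: float) -> None:
    """Print the IEEE 754 single-precision fields of x."""
    # Pack x as a big-endian 32-bit float, then reinterpret the raw bits.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    # For normalized numbers (exponent not 0 or 255), the significand is
    # 1.fraction and the value is (-1)**sign * 1.fraction * 2**(exponent - 127).
    # The leading 1 is the hidden bit: implied, never stored.
    print(f"{x}: sign={sign} exponent={exponent} stored fraction={fraction:023b}")

show_bits(1.0)   # fraction is all zeros; the hidden bit supplies the 1
show_bits(1.5)   # fraction 100...0; significand is 1.1 in binary
```

Knowing this once mattered when pros wrote their own numeric routines; today it is trivia for most, which is exactly the point.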