I’ve been programming in C++ for almost a quarter of a century now. I grew up, professionally, with C++, and in many ways, it grew up along with me.

For someone who is used to C++, even used to recently standardised C++, it’s hard not to feel apprehension when looking at C++20. Modules, coroutines, ranges, concepts — these are all massive features that I don’t have real experience with yet and that offer completely new ways of doing things. There are also several medium-sized features which could turn out to have large impacts; for example, the spaceship operator, or the further expansion of constexpr capabilities.

It’s very easy to react to all this by thinking, “It’s too much; it’s not baked yet; C++ is collapsing under its own weight.” And it’s natural not to trust something new, especially when I already know how to solve my current problems using current technology.

What I try to remember is: this isn’t a new situation, and this isn’t a new feeling. It’s not new to me, and it’s not new to the world. Many people have felt this way about different technologies over the decades, and very seldom has actual catastrophe been the result.

We felt this way about C, when we programmed games mostly in assembler. We felt this way about object-oriented C++ once we started trusting C. We felt this way about generic C++ and the STL after we’d gotten used to object-oriented C++. Some of us currently feel this way about C++11 and beyond.

Often this feeling is based on seeing new tech not (currently) performing quite as well as old tech for certain use cases. This is also not a new concern. People even thought the same about assembly language, preferring actual machine code! In her HOPL keynote (1978), Grace Hopper said:

In the early years of programming languages, the most frequent phrase we heard was that the only way to program a computer was in octal. […] the entire establishment was firmly convinced that the only way to write an efficient program was in octal.

It’s not wrong to feel this way because feelings aren’t wrong. But it’s also worth remembering that we’re always on the edge of technology that is in the process of proving — and improving — itself. Some things don’t stick around, but many do, and they get better, usually surpassing older tech in all sorts of ways, performance included.

We’re always learning how to do more. Betting against progress is seldom a good bet.

Much has been made of Bjarne’s 2018 paper, “Remember the Vasa”, and it might seem to validate these feelings of distrust for the new. But even that paper provides some historical context, recalling:

During the early days of WG21 the story of the Vasa was popular as a warning against overelaboration (from 1992):

Please also understand that there are dozens of reasonable extensions and changes being proposed. If every extension that is reasonably well-defined, clean and general, and would make life easier for a couple of hundred or couple of thousand C++ programmers were accepted, the language would more than double in size. We do not think this would be an advantage to the C++ community.

In fact, that’s exactly what has happened, maybe several times over since 1992, and yet the C++ community is thriving more than ever, with conferences, podcasts, a healthy social networking scene, growing standards participation, and more.

In The Design and Evolution of C++, section 6.4, Bjarne mentions that “Remember the Vasa” became a popular cautionary tale at the Lund meeting (of June 1991). But he also says, in section 9.2.2.2,

[C++] was designed to provide a toolset for professionals, and complaining that there are too many features is like the “layman” looking into an upholsterer’s tool chest and complaining that there couldn’t possibly be a need for all those little hammers.

C++ is growing. Change can be daunting, but I think we’re going to be fine. And when one day a specific little hammer is just the right tool for the task at hand, I’ll be thankful that someone added that hammer to my toolbox.
