Danny Kalev: In the past few years, you have supported several features that aim to improve performance: SCARY iterators, noexcept, and implicitly-defined move constructors. Some committee members were opposed to these proposals because they feared that they might compromise code security and prolong the C++0x standardization process. Is performance still so important these days? How much risk should the committee take with features that haven't been fully tested before?

Bjarne Stroustrup: Certainly performance is important to C++. C++ is disproportionately used where performance really matters. It is performance (as well as flexibility) that gives C++ such a massive presence in high-end systems (e.g., Google, Amazon, and Amadeus), in embedded systems (e.g., cell phones, cars, and planes), and in browsers and virtual machines (e.g., Chrome, V8, Havok, and HotSpot).

There seem to be standard ways to oppose anything new. Fear of security problems is part of that arsenal, but I see nothing in the proposals that I have supported that worsens the problems of C++ vis-à-vis security. Of the features you mention, SCARY has no security implications whatsoever - it is simply a technique for implementing algorithms with less overhead than is possible with over-constrained nested types. In fact, it is one of the two major current ways of implementing iterators for standard containers - the proposal was simply to require the more efficient and more flexible alternative to be consistently used by implementers. Even then, SCARY failed because many felt, at least in this case, that the committee should not limit the choices traditionally available to implementers. However, I expect that the data is sufficiently convincing that soon every standard library implementation will support SCARY.
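
[Editor's note: a minimal sketch of what SCARY buys, not code from the interview. MyAlloc is a hypothetical allocator invented only to give the container a second, distinct allocator argument; whether the check succeeds depends on the standard library implementation, since the proposal to require SCARY was not adopted.]

    #include <cstddef>
    #include <memory>
    #include <type_traits>
    #include <vector>

    // A hypothetical alternative allocator, present only so that the two
    // vector instantiations below differ in their allocator argument.
    template<class T>
    struct MyAlloc {
        typedef T value_type;
        MyAlloc() {}
        template<class U> MyAlloc(const MyAlloc<U>&) {}
        T* allocate(std::size_t n) { return std::allocator<T>().allocate(n); }
        void deallocate(T* p, std::size_t n) { std::allocator<T>().deallocate(p, n); }
    };
    template<class T, class U> bool operator==(const MyAlloc<T>&, const MyAlloc<U>&) { return true; }
    template<class T, class U> bool operator!=(const MyAlloc<T>&, const MyAlloc<U>&) { return false; }

    int main() {
        // On a SCARY implementation, the iterator type does not depend on the
        // allocator, so one iterator type (and the code instantiated for it)
        // serves every instantiation of the container. Implementation-specific.
        bool scary = std::is_same<std::vector<int>::iterator,
                                  std::vector<int, MyAlloc<int> >::iterator>::value;
        return scary ? 0 : 1;
    }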

noexcept is, if anything, safer than its C++98 alternatives (use of exception specifications or home-brew error-handling schemes), as well as faster.

"How much risk should the committee take" is a big and difficult question as is "what kind of risks should the committee take?" Essentially every new feature or library implies some risk: it could be risk of breakage of existing code, risk of confusion, risk of damaging a business by eliminating the need of a product, risk of breakage of tools, risk of invalidating educational materials, etc. Making no changes would seem to be 100% safe, but would lead to a stale user community, inferior support for newer programming techniques, and eventually lead to billions of lines of code being rewritten at enormous cost. Doing nothing is not risk free. My opinion is that doing nothing guarantees failure in the medium term, implying that to minimize risk, we must take many calculated risks.

I don't think that any significant feature can be completely tested - that would involve deploying it for a few years to a diverse community of thousands. Such an "experiment" would look more like an attempt to force a dialect on the community as a whole or as an attempt to create a lock-in mechanism. We have to make do with smaller experiments, consideration of many alternatives, lots of discussion, and work on the standard text.

I think that much debate about risk is misdirected. People mostly worry about breakage of old code and implementability. Typically the greater danger is poor design. It is really hard to ensure that a new feature is sufficiently general and sufficiently easy to teach, learn, and use. Fear of novelty often leads to overly timid language extensions with overly elaborate syntax and/or semantics. We need relatively more discussion about design and more work on use cases, but most people seem to be more comfortable discussing technical details and implementation techniques.

Danny Kalev: Let me pin down the issue of calculated risks: should there be a different standardization process for isolated features of minimal interaction with other features (say class member initializers), as opposed to pervasive features that affect almost every aspect of C++? Allegedly, the former can be excised from the language rather easily should they fail (e.g., exported templates) whereas the latter are less flexible -- every small modification affects the entire language, libraries etc. Or is it too naive to assume that such a division is possible?

Bjarne Stroustrup: It's hard to pin down risk. People really do differ in what they consider to be a risk and in their tolerance of risk.

It is hard to cleanly separate features into "localized" and "pervasive." The ideal is for a feature to interact cleanly with essentially every other feature in the language. In many cases, the alternative is duplication of functionality. Ideally, we have only one notion of name lookup, only one notion of expression, only one notion of scope, only one notion of initialization. Whenever we succeed at this ideal, we get a minimal language and any change to that feature potentially affects every other feature in the language.

It is rather similar for standard libraries: ideally, the standard containers should be used by other standard library components to hold data, and the standard algorithms should be used in the implementations of other library components. When we see alternative container libraries used, or libraries managing data through direct use of free store and pointer manipulation, we are seeing forms of failure.

So, my conclusion is that you can minimize risk relative to a language feature or standard-library component only by not building upon it within the language or standard-library. However, by doing so you maximize replication/redundancy in the standard, maximize the size of the standard, and could come close to maximizing the implementation and learning effort. To use a different terminology, we want the standard to be strongly cohesive and that makes a loose coupling of different parts of the standards process essentially impossible. In particular, in the standards committee, we always need more people who care for the whole language (rather than just the subset used by their favorite developer community) and who understand the whole language (at some suitable depth).

Danny Kalev: Let's talk about noexcept, a feature that is in the FCD. C++ programmers might wonder why noexcept is needed when there's already a throw() specification to designate a function that shouldn't throw (let's ignore for a moment the recent deprecation of exception specifications). What is the main advantage of noexcept over throw()? Considering that the static checking of noexcept is limited, doesn't this feature introduce new code security risks?

Bjarne Stroustrup: No, noexcept does not open a security hole; rather, compared to throw(), it closes a few. CERT supported noexcept as approved by the committee. There are people who want exception throws statically checked. I'm not among those: no language has succeeded in providing a system of static checking that is not crippling for large systems, inefficient, or easily bypassed. The exception specifications of C++98 were a compromise design that we would have been better off without. They are now deprecated, so don't use them. They lead to efficiency problems and surprises. noexcept addresses the one case where the exception specifications sometimes worked well: simply stating that a function is not supposed to throw. In C++98, some people expressed that by writing throw(), but they could not know whether their implementation imposed a significant run-time overhead (some do), and a violation might end up executing a (potentially unknown) unexpected handler. With noexcept, a throw is considered a fatal design error and the program is immediately terminated. That gives security and major optimization opportunities.

The availability of noexcept should lead to heavier use of exceptions and to safer code.
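
[Editor's note: a minimal sketch, not code from the interview; Buffer and compute are hypothetical names. It illustrates the two points above: a noexcept move constructor lets std::vector move rather than copy elements when it reallocates, and a noexcept function replaces the old throw() machinery (possible run-time checks, unexpected handlers) with a simple terminate-on-violation guarantee.]

    #include <utility>
    #include <vector>

    // Hypothetical type with a noexcept move constructor. Because the move
    // cannot throw, std::vector may move, rather than copy, existing elements
    // when it reallocates (via std::move_if_noexcept).
    struct Buffer {
        std::vector<char> bytes;
        Buffer() {}
        Buffer(Buffer&& other) noexcept : bytes(std::move(other.bytes)) {}
        Buffer& operator=(Buffer&& other) noexcept { bytes = std::move(other.bytes); return *this; }
    };

    // C++98 style would be `void compute() throw();`, which could install
    // run-time checking and an unexpected handler. With noexcept, a violation
    // simply calls std::terminate(), which the optimizer can rely on.
    void compute() noexcept;

    int main() {
        std::vector<Buffer> v(3);
        v.resize(100);   // reallocation moves the Buffers; no copies, no throws
    }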

Danny Kalev: The difficulties associated with the design of some C++0x features including rvalue references, lambda expressions and of course concepts lead critics to claim that C++ is too old and inflexible. Is there a grain of truth in this claim? Will there come a time when you decide that C++ can no longer be extended and improved, and that a new programming language is required instead?

Bjarne Stroustrup: I more often hear the claim that C++ is too flexible and, of course, that it is too large. New languages tend to be simple because they don't yet serve a large community. All languages grow with time. Of course it is harder to modify a large, old, and massively used language than to come up with something new. However, most new languages die in infancy, and many of the new simple ideas turn out to be just too simplistic for real-world use. Adding to C++ is difficult, and the process to get a new feature accepted is typically most painful for its proposer. However, once accepted, the new feature can have major impact on a large community. If I didn't want to have an impact on the world, I could try to get my intellectual stimulation through crossword puzzles, writing fiction, or designing a toy programming language.

Of course, I dream of designing a new, smaller, and better language than C++. But each time I have looked at the problems (to be solved by a new language) and the likely impact of the new language, I have decided that most of what could be achieved through a new language could be done within C++ and its standard library. The odds of making a positive impact on the programming world are - for me at least - much better through the tedious route through C++ than through the design, implementation, and popularization of a new language.

Danny Kalev: Regarding Unicode support, the C++0x standard includes char16_t and char32_t, as well as u16string and u32string, to work with UTF-16 and UTF-32 encoded Unicode strings. However, the standard library doesn't support these in streams. For example, there is no u16cout or u32cout. I'm wondering, how can we use char16_t strings and write them to standard output?

Bjarne Stroustrup: Obviously, we ought to have Unicode streams and other much extended Unicode support in the standard library. The committee knew that but didn't have anyone with the skills and time to do the work, so unfortunately, this is one of the many areas where you have to look for "third party" support. There are libraries "out there" with good support for Unicode. For example, the Poco libraries "for building network- and internet-based applications" (http://pocoproject.org/index.html) are available for download under the Boost open-source license. There is also Unicode support (somewhere) in the Microsoft C++ support libraries.

It is unfortunate that something as fundamental as Unicode library support is not in the standard library, but in general, we have to remember that most libraries are not and can't be in the standard library. My C++ page contains links to many libraries, to collections of libraries, and to lists of libraries. One estimate is that there are over 10,000 C++ libraries "out there" (both commercial and open-source). The problem is to find them and evaluate them.
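
[Editor's note: a minimal sketch of one narrow path the C++0x draft library does offer without a third-party library: convert the UTF-16 string to UTF-8 with std::wstring_convert and write the resulting bytes to std::cout. It assumes the terminal accepts UTF-8; the codecvt facility used here was later deprecated in C++17.]

    #include <codecvt>
    #include <iostream>
    #include <locale>
    #include <string>

    int main() {
        std::u16string name = u"\u00C5ngstr\u00F6m";   // a char16_t (UTF-16) string

        // Convert UTF-16 (char16_t) to UTF-8 (char) and stream the narrow result.
        std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> conv;
        std::string utf8 = conv.to_bytes(name);
        std::cout << utf8 << '\n';
    }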

Danny Kalev: Finally, what are your New Year's resolutions?

Bjarne Stroustrup: To get C++0x formally approved as an ISO standard.

To produce a good first draft of The C++ Programming Language (4th Edition).

To spend more time with my grandchildren.

To have at least one interesting new technical insight.
