When new features are proposed for C++, it is desired that they do not introduce breaking changes. This is typically understood as:

Every program that used to compile (was well formed) continues to compile with the same semantics. A program that failed to compile (was ill-formed) can now be made well formed and assigned new, desired semantics.

For instance, the following was an invalid C++03 program:

```cpp
#include <string>
#include <utility>

void feature(std::string s1)
{
  std::string s2 = std::move(s1); // no std::move in C++03
}
```

Therefore, it does no harm to make this code well formed in C++11 and assign it some useful semantics. This rule is not followed in 100% of the cases, but this is the idea in general.

However, even though it works in most of the cases, I believe that this criterion of a “safe addition” is not technically correct, as it fails to take into account an important fact: failure to compile certain programs is a useful, important feature, and if these programs suddenly start to compile, it can cause harm. In this post we will go through the cases where compile-time failure is considered a useful feature.

First, type-system errors are so basic and common a feature that we may often forget how helpful they are. If we try to compile the following program:

```cpp
#include <vector>

void consume(int i);

int main()
{
  std::vector<int> v;
  consume(v); // ERROR
}
```

We get a type-system error: one cannot convert a vector to an integer. What I am saying is plainly obvious. This has saved my day many a time. It is a feature of statically typed languages: the compiler informs you about type-system errors, rather than your users. We may even appreciate it, but can it be called a feature? In a case like the one above, that would be artificial. It is an obvious consequence of the type system: if a function (like consume ) does not have a signature that works with the given type, then it doesn’t work. Period. But the picture looks different when you consider a library like Boost.Units, which performs compile-time dimensional analysis. It works more or less like this:

```cpp
Time t1(10.1 * seconds), t2(10.2 * seconds);
Distance s1(12.0 * meters), s2(21.0 * meters);

Time tX = t1 + t2;      // ok
Velocity vX = s1 / t1;  // ok
Time tY = t1 * t2;      // ERROR: unit mismatch
Distance sX = s1 + t2;  // ERROR: unit mismatch
```

Such a library can turn a unit mismatch (a term from dimensional analysis) into a type mismatch (a term understood by the compiler). Now, this can definitely be advertised as a feature, and indeed it is the flagship feature of Boost.Units. Every user of this library relies on the guarantee that the last two lines will always fail to compile. If they should ever compile on some newer compiler, or after a library upgrade, it would be a serious bug, because the very goal of such a library is to render compile-time (type-system) failures.

Boost.Units is a spectacular use of the C++ type system, but there are more mundane cases where we have to make a non-trivial effort to make certain statements fail to compile. For one example, consider the bug in the Boost.Rational library that I described the other day. It is one of many bugs caused by implicit conversions. In short, it can be illustrated with the following example:

```cpp
struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}
};

Rational r = 0.5;
```

The last line just works; and because it works, a normal user will assume that the result will be:

```cpp
assert(r.num == 1);
assert(r.den == 2);
```

But the result is different, because the meaning of this initialization is:

```cpp
Rational r = (int) 0.5;
```

We never wanted this initialization to be valid. We never declared a constructor taking a double . It was injected there against our will; and now we have to go through extra effort to make the conversion from double illegal.

Counteract implicit conversions

There are a couple of ways in which we can make the adverse conversion illegal:

1. Declare the converting constructor (from double ) private.
2. Make the converting constructor a template and use enable_if to disable it for floating-point types.
3. Declare the converting constructor and put a static_assert inside.
4. Declare the converting constructor as deleted.

Declaring the unwanted constructor private is probably the oldest and best-known way of achieving the goal. It has certain limitations, though. A member function declared private is still a function: it is accessible to other member functions and friends, and we do not want them to use our function either. We can leave our function declared but not defined, which is slightly better, but still has some drawbacks. The problem is then detected not at compile time, but only at link time; if you are compiling a library, you will not be warned at all. Also, it is a hack: the linker's message that it cannot resolve a symbol is not likely to help you identify the problem. Additionally, the trick with a private function will not work for free (non-member) functions. This is not a problem in our example, but it is a problem in general.

Another solution, available even in C++03, is to use the SFINAE trick in the form of enable_if . The mechanism behind it was briefly described in this post. Whether enable_if will work or not depends on how we use it. If we just add another constructor template to our class, the trick will not work:

```cpp
using boost::disable_if;
using boost::is_floating_point;

#define DISABLE_IF(C) typename disable_if<C, int>::type = 0

struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}

  template <typename T>
  Rational(T n, DISABLE_IF(is_floating_point<T>)) : num(n), den(1) {}
};

Rational r = 0.5; // still compiles!
```

This is because of how the enable_if mechanism works. If the condition is not satisfied (or, in the case of disable_if , if the condition is satisfied), the corresponding function is removed from the set of candidate functions. It will not be considered when selecting the best overload; but the process of selecting the best overload continues with the remaining functions. If there is another one that matches (as in the case above), it will be used. The following, on the other hand, will achieve our goal:

```cpp
struct Rational
{
  int num, den;

  template <typename T>
  Rational(T n, T d = 1, DISABLE_IF(is_floating_point<T>)) : num(n), den(d) {}
};

Rational r = 0.5; // fails as expected
```

Now, after removing the function from the candidates, there is no other candidate left, and we get our desired compile-time failure. The only inconvenience is that the failure message may convey too little information: “cannot convert double to Rational.” Not that bad in our case, but we might have wished for a better one: “Conversion from floating point numbers not yet implemented. Consider casting the argument to int or using the function rational_from_double .”

In order to produce a custom message upon failure, we can use static_assert , or — if it is not available on your compiler — Boost.StaticAssert. Let’s give it a try:

```cpp
struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}
  Rational(double n) : num(n), den(1)
  {
    static_assert(false, "conversion from floating point...");
  }
};

Rational r = 0.5;
```

This doesn’t work as expected: the static assertion fails the compilation in every translation unit that includes our header. Whether we try to convert from double , or whether we even use the type Rational at all, is irrelevant. This illustrates how static assertions work: they render a compilation error as soon as the condition in the assertion can be determined. In our case, the result can be determined as soon as the compiler sees the declaration of the class.

In order to get the thing right, we have to make sure that the condition cannot be determined while compiling the declaration of the class, but at the same time it should be determined upon an attempt to convert from double . This is where templates can help us again:

```cpp
struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}

  template <typename T>
  Rational(T n) : num(n), den(1)
  {
    static_assert(!is_floating_point<T>::value, "conversion from floating point...");
  }
};

Rational r = 0.5; // fails as expected
```

Now, the Boolean value in the condition can only be computed when we instantiate the template with a particular T . Plus, we are in control of the error message. But the solution is not ideal yet. Some people, in some contexts, want to test whether one type is convertible to another with the is_convertible type trait. They expect the following tests to pass:

```cpp
using boost::is_convertible; // or std::

static_assert(is_convertible<int, Rational>::value, "");
static_assert(!is_convertible<double, Rational>::value, "");
```

If you try it, you will find that the second assertion fails: even though converting double to Rational results in a compile-time failure, double is convertible to Rational ! So, you may ask, what does it really mean “to be convertible” then? For the purpose of is_convertible , and also for the SFINAE mechanism, to be convertible means that there exists a conversion function from the source type to the destination type that can be selected in the overload resolution process. It is irrelevant whether this function is only declared (and not defined), or whether it has a static assertion inside, or whether it triggers other template instantiations that would render an error. We only check whether overload resolution would succeed.

The fourth option is to use a C++11 feature: deleted functions:

```cpp
struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}
  Rational(double n) = delete;
};

Rational r = 0.5; // fails as expected
```

The intuitive meaning of a declaration with the keyword delete is “I do not want this function.” To be more precise, a deleted function participates in overload resolution: when looking for the best matching overload, it is treated as a normal function; but when it is selected, the program is ill-formed, which in the case of SFINAE means a substitution failure, and in the case of is_convertible , a negative answer. So this is somewhat different from the enable_if solution. The deleted function is “sticky”: it wins the overload resolution and thereby immediately triggers an error. The error message is slightly better now: “cannot convert double to Rational because the selected constructor has been explicitly deleted”. At least it gives an indication that someone made an effort to prevent this conversion, and there is a single place in the code responsible for the failure: in this place you can put a comment with an explanation.

Thus, as we can see, deleted functions offer a certain advantage over functions with a static_assert inside; but the latter have one advantage over deleted functions: with static_assert you can display an arbitrary, clear message to the user. There appears to be no ideal solution at the moment. One has recently been proposed, though, in N4186. If it were accepted, we would be able to use it like this:

```cpp
// NOT IN C++ (YET):
struct Rational
{
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}
  Rational(double n) = delete("conversion from floating...");
};

Rational r = 0.5; // fails as expected
```

This works like a deleted function, except that when the compiler error is generated, the message that we provide is required to be displayed along with it, as in the case of static_assert .

Testing the feature

For better or worse, all the above solutions achieve their goal: they cause a compilation failure when someone inadvertently tries to pass a double where a Rational belongs. We may want to call it a “negative feature”: its value lies in the fact that some programs fail to compile, and this is desired. Like any other feature, we want to test it with some sort of unit test. In C++11 this is possible to a great extent thanks to the extended SFINAE rules. But to make the task more difficult (and more realistic), we want to test it also on C++03 compilers. We also have to consider situations where static_assert is used to trigger the failure, in which case type traits or SFINAE techniques will not work. This is the task I face when maintaining a Boost library.

Such a negative feature is not testable from within C++: there is no function that tries to compile a program and “returns false” when the compilation fails. We have to test it from outside a C++ program. The way it is solved in Boost is this: I write a small, deliberately ill-formed program that illustrates what construct I expect my library to turn into a compiler error. With a build tool, I run the compiler and inspect the compilation result. If compilation fails, the build tool reports success; if compilation succeeds, the tool reports failure. Typically, there would be a number of such to-be-failed programs, and I want to test all of them. For instance, in our example, we could decide to also prevent the following use, which as of now is still legal:

```cpp
Rational r(0.5, 0.4);
```
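The build-tool logic described above can be sketched as a small shell script; this is a simplified stand-in for what the Boost build system does, and the file name and compiler invocation are assumptions:

```shell
#!/bin/sh
# Compile-fail test: write out a program that must be rejected, then invert
# the compiler's exit status (failure to compile means the test passes).
cat > expected_fail.cpp <<'EOF'
struct Rational {
  int num, den;
  Rational(int n, int d = 1) : num(n), den(d) {}
  Rational(double) = delete;
};
int main() { Rational r = 0.5; }  // must NOT compile
EOF

if c++ -std=c++11 -c expected_fail.cpp -o /dev/null 2>/dev/null; then
  echo "FAIL: program compiled, but was expected to be rejected"
  exit 1
else
  echo "PASS: compilation failed as expected"
  exit 0
fi
```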

You can see this technique used in the Boost regression test results. See here for the results of Boost.Optional. An example of such an expected-to-fail program can be found here.