This post describes a bug in GCC that can cause memory (or other resource) leaks in valid C++ programs.

One of the pillars of C++ philosophy is what we often call RAII: if you use classes to manage resources, acquiring them in constructors and releasing them in destructors, the language makes sure that, whatever happens and however you use such classes, the resources get properly released. We can easily test this guarantee with a fake resource class that logs acquisitions and releases:

struct Resource
{
  explicit Resource(int) { std::puts("create"); }
  Resource(Resource const&) { std::puts("create"); }
  ~Resource() { std::puts("destroy"); }
};

Whatever reasonable program we write (excluding situations where we use raise , longjmp , exit , abort , etc., or where we cause std::terminate to be called), we expect "create" and "destroy" to be output the same number of times.

This is the contract: I take care that my classes manage resources correctly, and the language takes care that the resources are always released correctly regardless of the complexity of the program. This even works in complex situations you might never have thought of:

Resource make_1() { return Resource(1); }
Resource make_2() { throw std::runtime_error("failed"); }

class User
{
  Resource r1;
  Resource r2;

public:
  explicit User()
    : r1{make_1()}
    , r2{make_2()} // what if make_2() throws?
  {}
};

Consider what happens when make_2() throws while this constructor is executing. r1 has already been constructed (its resources acquired), but the User object has not been created yet, and it never will be (because the constructor will not run to a successful end). This means that the destructor of User will never be called either. But the language is still required to call the destructor of any sub-object that has been successfully constructed, like r1 . Thus, r1 ’s resources are nonetheless released, even though no object of type User was ever fully constructed.

You might never have heard of this guarantee, but it still works to your advantage, preventing memory leaks.

But in one situation GCC will surprise you: namely, when you initialize a temporary using aggregate initialization. Let’s change our type User a bit, so that it is an aggregate:

struct User
{
  Resource r1;
  Resource r2;
};

It just aggregates members. No constructors, but we can still initialize it with aggregate initialization syntax:

void process(User) {}

int main()
{
  try {
    User u{make_1(), make_2()};
    process(u);
  }
  catch (...) {}
}

If you test it, it works correctly: the number of constructor calls equals the number of destructor calls, even though make_2() throws and complicates the situation. But u is an automatic object. If we change the example and create a temporary User instead:

int main()
{
  try {
    process({make_1(), make_2()});
  }
  catch (...) {}
}

This is where the bug manifests itself. Member r1 is initialized but never destroyed. Admittedly, this is a rare case: it requires an exception thrown in the middle of initialization, a temporary, and aggregate initialization. But leaks usually manifest themselves in the face of exceptions, and the fact that the case is rare makes you less prepared for it.

Here is a full example:

#include <cstdio>
#include <stdexcept>

struct Resource
{
  explicit Resource(int) { std::puts("create"); }
  Resource(Resource const&) { std::puts("create"); }
  ~Resource() { std::puts("destroy"); }
};

Resource make_1() { return Resource(1); }
Resource make_2() { throw std::runtime_error("failed"); }

struct User
{
  Resource r1;
  Resource r2;
};

void process(User) {}

int main()
{
  try {
    process({make_1(), make_2()});
  }
  catch (...) {}
}

You can test it online here. It is present in GCC 4, 5, and 6. For a more real-life, and somewhat longer, illustration of the problem, see this example provided by Tomasz Kamiński.

A bug report for this already exists.

Maybe your program already leaks because of this surprise?