In the previous post we sketched out the view that error handling is about expressing the success dependency between operations. I also offered the guideline “destructors only for releasing resources”. In this post we are going to see what it means in practice. We will try to save some data to a file using std::ofstream.

Here’s the situation. We are writing a word processor application, and we are going to write a function save() that saves the user’s text document to disk. That is, we say, “this function saves data to disk”, but we really mean, “this function either saves data to disk or it reports failure.” Failure to save work in a word processor is not that big of a deal, provided that the program informs the user about it. If I push the “save” button and the program says, “no space on disk”, I know that I cannot trust the program and have to take action myself: clean up my disk, or copy the contents to the clipboard and send them over email. The worst thing that can happen is when I push the “save” button and the program behaves as if it saved my work, whereas nothing was saved. I will now believe that everything is fine and continue working, and even more of my work will likely get lost.

Let’s start with the most natural thing to do:

void save()
{
  std::ofstream f{"work.txt"};
  f << provideA();
  f << provideB();
}                                 // flush in destructor

We did not put in any explicit error checking code, because we assume that the exception handling mechanism will do just the right thing. But it will not. What happens if the file for some reason cannot be opened? No exception will be thrown, because IO streams by default do not throw exceptions. In fact, IO streams were designed before we had exceptions in C++. We can observe this default behavior directly. In the following sketch the path is made up, chosen only so that opening fails:
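#include <cassert>
#include <fstream>

int main()
{
  std::ofstream f{"/no_such_dir/work.txt"};  // opening fails...
  assert(!f.is_open() && f.fail());          // ...but no exception is thrown
  f << "data";                               // silently ignored: the stream stays failed
}

So, in order to have our ofstream throw exceptions, we have to instruct it to do so before we even open the file: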

void save()
{
  std::ofstream f;
  f.exceptions(std::ios_base::failbit | std::ios_base::badbit);
  f.open("work.txt");
  f << provideA();
  f << provideB();
}                                 // flush in destructor

We start by initializing a dummy ofstream, which cannot fail. We then tell it which failures to report with exceptions. And when we subsequently open the file, we will get an exception thrown if opening fails. Now, what happens if the second write instruction, the one buffering the result of provideB(), fails? We will have written only partial data, and the destructor will flush it to disk. Saving half of the data may be worse than saving no data at all. But that is not the biggest concern.

As long as we are just using <<, we are only writing the contents into the internal buffer. The real write to the disk is going to be performed in the destructor. It is this write to the disk that is the most likely operation to fail, because now we are really messing with the filesystem. What happens if this write fails? The answer is: nothing. This happens in the destructor, and the Standard Library is very cautious not to throw exceptions from destructors. So whatever fails, the library will keep it secret. You will not be informed by any means. Function save() will return fine, and you will be led to believe that it succeeded, but no data will really be stored. This is what happens when one follows the rule, “do not throw from destructors,” too literally.
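We can make this silent failure visible. The following sketch is Linux-specific: it writes to /dev/full, a device that can be opened for writing but reports “no space left on device” on every physical write:

#include <fstream>
#include <iostream>

int main()
{
  {
    std::ofstream f{"/dev/full"};   // opening succeeds
    f.exceptions(std::ios_base::failbit | std::ios_base::badbit);
    f << "important data";          // goes into the buffer: no exception yet
  }                                 // destructor flushes, the write fails, the failure is swallowed
  std::cout << "no exception escaped; the data is gone\n";
}

Even with exceptions enabled on the stream, the destructor catches the failure internally, and the program prints its message as if everything went well.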

But I am not saying that you should start throwing from destructors. I am saying that you should design and use your types in such a way that destructors never need to signal failure. Our code could be rewritten like this:

void save()
{
  std::ofstream f;
  f.exceptions(std::ios_base::failbit | std::ios_base::badbit);
  f.open("work.txt");
  f << provideA();
  f << provideB();
  f.flush();                      // write data to disk (can throw)
}                                 // only close

Now, member function flush() performs the write to disk. Even though it is obvious that the last operation on the ofstream is to flush data to disk, we still want to write it down explicitly. This way everyone can see that the write happens here: that it is canceled if a preceding operation failed, and that it can itself fail and cause subsequent operations to be canceled.
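The caller can now rely on this: if save() returns normally, the data is on disk; if anything goes wrong, an exception escapes. Here is a minimal sketch of such a caller, where notifyUser() is a made-up helper standing in for whatever UI feedback the application uses:

void onSaveButton()
{
  try {
    save();
    notifyUser("document saved");      // reached only if the flush succeeded
  }
  catch (const std::ios_base::failure& e) {
    notifyUser(std::string("could not save: ") + e.what());
  }
}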

Now the destructor does not have to write anything. It only needs to release the file handle back to the operating system. Can this operation fail? Yes. But such a failure does not require our callers to be canceled: we have done our job, and all data has been written to the file in the previous operation. We may be leaking a resource now, but that is not what our callers rely on: they can move on.

Our code is not as short as it could have been, but it is correct. And there is something more to it than just length: the programmer now becomes exposed to the fact that writing to a file is done in stages: first buffer, then flush. I think this is in fact desirable. In C++, which is performance-sensitive, a design strategy like buffering is part of the contract.

Note: In the above example, I call flush() even though close() seems more appropriate. Function close() conveys additional useful information: that we do not intend to write to this file again (in this function). We will explore this in detail in the next post.