The latest version of this document can be found online at https://dr-knz.net/measuring-errors-vs-exceptions-in-go-and-cpp.html. Alternate formats: Source, PDF.

This document was designed as a Jupyter Notebook. The notebook and accompanying data files can be downloaded here.

The following document investigates the performance of signalling errors from functions in Go and C++. In contrast with the previous analyses, which each focused on a single topic, this one investigates three separate topics.

There are two common mechanisms to signal uncommon errors in modern programming languages:

1. returning multiple values, one of which can be set to indicate an error condition. This is the most common mechanism in Go, and is promoted in Rust via the Result type. In this mechanism, each intermediate caller must check the error value and switch to an alternate control path if an error is detected. The overhead cost of these intermediate checks is paid on every call, even when errors are uncommon.

2. returning simple values, and throwing (or raising) an exception to indicate an error condition. The common code path is simple; the language run-time system is responsible for stack unwinding to propagate exceptions to the top-level caller, where they are handled. This is the most common mechanism in Java, and it is well supported by most languages (including Go, where exceptions are called “panics”).
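To make the two mechanisms concrete, here is a minimal sketch in Go contrasting them on the same hypothetical computation (the function names are illustrative, not from the original benchmark code). Note how the error-return variant checks `err` on every intermediate call, while the panic variant sets up its handler once, in the top-level caller:

```go
package main

import (
	"errors"
	"fmt"
)

// Mechanism 1: error returns. Every intermediate caller must
// check the error value, even when errors never occur.
func leafRet(x int) (int, error) {
	if x < 0 {
		return 0, errors.New("negative input")
	}
	return x + 1, nil
}

func middleRet(x int) (int, error) {
	r, err := leafRet(x) // this check is paid on every call
	if err != nil {
		return 0, err
	}
	return r * 2, nil
}

// Mechanism 2: exceptions (panics in Go). The common path carries
// no error value; intermediate callers need no checks.
func leafPanic(x int) int {
	if x < 0 {
		panic(errors.New("negative input"))
	}
	return x + 1
}

func middlePanic(x int) int {
	return leafPanic(x) * 2 // no error check here
}

func topPanic(x int) (result int, err error) {
	defer func() { // handler set up once, in the top-level caller
		if r := recover(); r != nil {
			err = r.(error)
		}
	}()
	return middlePanic(x), nil
}

func main() {
	r1, _ := middleRet(3)
	r2, _ := topPanic(3)
	fmt.Println(r1, r2) // prints "8 8"
	_, err := topPanic(-1)
	fmt.Println(err != nil) // prints "true"
}
```

Both variants compute the same result; they differ only in where the error-handling cost is paid, which is exactly the trade-off investigated below.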

The emphasis in the first point above gives us the working hypothesis for the present analysis: since intermediate error returns incur a price paid even when errors do not occur, they must be worse for performance. How much so?

We can predict this will be a trade-off, based on the following observations:

the intermediate checks on the common case are costly, so “it must be better for performance” to not have them; however

when using exceptions, the point in the top-level caller where exceptions are caught must set up an exception handler, and this setup happens every time, even if the error case is uncommon. This may be costly too!

So the trade-off is really comparing the cost of the intermediate checks vs. the cost of setting up an exception handler.

We can then further predict what these costs will be.

Predicting the cost of error signalling via error returns

Based on my previous analysis of the Go calling convention, we can make an educated guess as follows:

in Go, an error result “costs” two words of storage, because error is an interface type; also, the result values are passed via memory and copied at every intermediate call. We will thus see error propagation incur two memory stores to return an error value alongside the main result in leaf functions; two memory loads and a conditional branch to check the error in intermediate calls; plus two additional memory stores in intermediate calls to propagate the error value (or absence of error) on their return path.

in C++, an error result usually costs just one word, because it is either a simple type or a heap-allocated object, and C++ uses simple pointers to refer to them (the vtable pointer, if an abstract base class is used, is stored in the object itself, not in its reference). Assuming the main result is a simple value too, both the main result value and the error value can be passed in registers. The overhead is thus one register initialization in the leaf function; plus one register test and conditional branch in intermediate calls; and one more register initialization in intermediate calls on their return path.

Overall, we can thus expect an overhead twice as large in Go as in C++, based on instruction count alone. We will test this hypothesis experimentally.

Of note, the cost of signalling errors via error returns is multiplied by the size of the computational work performed; namely, by the number of dynamic calls and returns during the work. It is thus not a fixed overhead. We will come back to this.