Charles Karney showed me a clever trick for computing log(1+x). It only takes a couple lines of code, but those lines are not obvious. If you understand this trick, you’ll understand a good bit of how floating point arithmetic works.

First of all, what’s the big deal? Why not just add 1 to x and take the log? The direct approach will be inaccurate for small x and completely inaccurate for very small x. For more explanation, see Math library functions that seem unnecessary.

Here’s how Charles Karney implements the function in C++:

template<typename T> static inline T log1p(T x) throw() {
    volatile T y = 1 + x,
               z = y - 1;
    return z == 0 ? x : x * std::log(y) / z;
}

The code comes from GeographicLib, and is based on Theorem 4 of this paper. I’ve read that paper several times, but I either didn’t notice the trick above or forgot it.

The code includes comments explaining the trick. I cut out the comments because I’ll explain the code below.

If y = 1+x and z = y-1, doesn’t z just equal x? No, not necessarily! This is exactly the kind of code that a well-meaning but uninformed programmer would go through and “clean up.” If x isn’t too small, no harm done. But if x is very small, the “cleanup” would destroy the accuracy. It’s also the kind of code an optimizing compiler might rearrange, hence the volatile keyword telling the optimizer not to be clever.

Imagine x = 1e-20. Then y = 1+x sets y to exactly 1; a standard floating point number does not have enough precision to distinguish 1 and 1 + 10^-20. Then z = y-1 sets z to 0. Since z == 0, the function returns x itself, and that is the right answer: for such small x, log(1+x) = x - x²/2 + … agrees with x to full machine precision. When x is larger and z is nonzero, z will not exactly equal x in general, but it will be a good approximation to x.

If z is not zero, then z is exactly y - 1: because y is so close to 1, the subtraction loses no bits. So log(y)/z equals log(1+z)/z computed at z, and since the function log(1+t)/t varies slowly near t = 0, evaluating it at z instead of x introduces almost no error. Multiplying log(y)/z by the exact value of x then gives a good approximation to log(1+x).