A computer has two different clocks. First, the wall clock: the one we all know, used to get the current time of day.

This clock is subject to variations. For example, if it is synchronized with NTP (Network Time Protocol), the local clock of our server can jump backward or forward in time after a synchronization. So a duration measured with the wall clock can be biased.

The second clock is called the monotonic clock. Here, we have a guarantee: time always moves forward and is not impacted by the variations that lead to jumps in time.

The only possible change is a frequency adjustment: if our server detects that its local quartz is running faster or slower than the NTP server, it can adjust its clock rate. But again, there is no jump in time with the monotonic clock.

Therefore, if we have to measure durations, we must use the monotonic clock. Note that this rule of thumb is only valid for local duration measurements: the monotonic clocks of two different servers are, by definition, not synchronized, so measuring a distributed execution based on these clocks will not be accurate.

Let’s come back to our previous example. The problem was that in Java, System.currentTimeMillis() is based on the wall clock, so it is subject to variations and can even yield negative durations. In comparison, System.nanoTime() is monotonic, so that is the method we should have used.

What about another language, Go for example? The standard way to get a duration is based on the following code:

time.Time.Sub, which returns a time.Duration, is actually based on the monotonic clock according to the documentation (the same applies to the time.Since helper).

In conclusion, you should check how your programming language handles these two distinct clocks to make sure your duration measurements are actually accurate.

Further Reading