During the evolution of the Unix time API from the early 1970s onwards, typical machine word lengths have changed twice - from 16 to 32 bits and then to 64 bits, which is the norm today (in 2017) and seems unlikely to change in the foreseeable future.

The changes in word lengths have left some scars in the Unix API. At the very beginning time_t was defined as a signed 32-bit quantity, so it couldn’t be held in the 18-bit registers of the PDP-7 or the 16-bit registers of the later PDP-11. This is why many of the Unix time calls take a pointer to time_t rather than a time_t; by passing the address of a 32-bit span in memory, the design could get around the narrowness of the register width. This interface glitch was not fixed when word lengths went to 32 bits.
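You can still see this history in the shape of time(2), which both returns the current time and, if its argument is non-NULL, stores the result through the pointer. A minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now;

    (void)time(&now);    /* the historical pointer-passing form */
    now = time(NULL);    /* the return value also works on modern systems */

    printf("%lld seconds since the epoch\n", (long long)now);
    return 0;
}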

A more serious problem is that 32-bit Unix time_t counters will turn over just after 2038-01-19T03:14:07Z. This is expected to cause problems for embedded Unix systems; their magnitude is difficult to anticipate, and we can only hope the event will be the same sort of damp squib that the Year 2000 rollover turned out to be.
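To check where the limit falls, you can hand the largest signed 32-bit value to gmtime(3) and format the result; a minimal sketch:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t last = (time_t)INT32_MAX;    /* 2147483647 = 2^31 - 1 */
    char buf[sizeof "YYYY-MM-DDTHH:MM:SSZ"];

    strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", gmtime(&last));
    printf("%s\n", buf);                /* prints 2038-01-19T03:14:07Z */
    return 0;
}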

Modern Unix systems use a signed 64-bit time_t. These counters will turn over approximately 292 billion years from now (2^63 seconds), at 15:30:08 on Sunday, 4 December 292,277,026,596. No problems with this are presently anticipated.

Register length limits have also affected the representation of time at subsecond precision. To work around them, and to avoid floating-point roundoff and comparison issues, the C API traditionally avoided representing fractional-second times as a scalar float or double quantity. The reason has to do with the precision offered by the different float formats:

Table 3. Float precision

Word size        Mantissa   Exponent   Historical name
32-bit float     23         8          Single precision
64-bit float     52         11         Double precision
128-bit float    112        15         Quad precision

(The sums look off-by-one because of the sign bit. You can learn more about the IEEE754 floating-point formats that give rise to these numbers at [FP]. They were standardized in the 1980s, when VAXen were the workhorse machines of Unix, and are now implemented in hardware on Intel and ARM architectures, among many other places.)

When the Unix time API first had to represent subsecond precision, microsecond resolution was required to represent times comparable to a machine cycle.

The problem was that a microsecond count requires 20 bits (2^20 = 1,048,576, just enough to count the 1,000,000 microseconds in a second). A microsecond-precision time with 32 bits of integer part is on the far edge of what a double-precision float can hold:

seconds:        32 bits
microseconds:   20 bits
                -------
total:          52 bits

That would barely have fit, and seemed likely to be a bit flaky in actual use due to floating-point rounding; doing any math with it would lose precision quickly. Trying to go finer-grained, to nanosecond resolution, would have required 10 more bits (a nanosecond count needs 30 bits, since 2^30 is just over a billion) that weren’t there in double precision.
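You can see the squeeze directly: near the top of the 32-bit second range, the spacing between adjacent doubles is 2^-21 seconds, about 477 nanoseconds - adequate for microsecond resolution, hopeless for nanoseconds. A minimal demonstration (link with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* A timestamp near the top of the 32-bit second range. */
    double t = 2147483647.123456789;

    /* The gap between t and the next representable double: */
    printf("step = %.9f seconds\n", nextafter(t, INFINITY) - t);
    return 0;
}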

Thus, quad-precision floating point would have been required for nanosecond resolution even with 32-bit second counts. Given the high cost of FPU computation at the time, and the near-waste of 64 bits of expensive storage, this took float representation out of the running.

This is why fractional times are normally represented by two-element structures in which the first member is seconds since the epoch and the second is an integral offset in sub-second units - originally microseconds.

The original subsecond-precision time structure was associated with the gettimeofday(2) system call in 4.2BSD Unix, dating from 1983. It looks like this:

struct timeval {
    time_t      tv_sec;     /* seconds */
    suseconds_t tv_usec;    /* microseconds */
};
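A minimal call sketch, assuming a POSIX system (POSIX now marks gettimeofday(2) obsolescent in favor of clock_gettime(2)):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);    /* the second (timezone) argument is obsolete */
    printf("%lld.%06ld\n", (long long)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}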

Note the microsecond resolution. The newer POSIX time functions use this:

struct timespec {
    time_t tv_sec;     /* seconds */
    long   tv_nsec;    /* nanoseconds */
};

(No, that’s not a paste error. The struct timespec members really do have tv_ prefixes on their names. This seems to have been someone’s attempt to reduce required code changes. It was probably a bad idea.)
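In practice a struct timespec is usually filled in by clock_gettime(2); a minimal sketch (on older glibc, link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_REALTIME, &ts);
    printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}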

The struct timespec representation has nanosecond resolution. The change is related to the tremendous increase in machine speeds since the 1980s, and the correspondingly increased resolution of hardware clocks. While it is conceivable that in the future we may see further generations of these structures in which the subsecond offset is in picoseconds or smaller units, some breakthrough in fundamental physics would be required first - at time of writing, processor cycle times seem to be topping out in the roughly 0.1ns range due to quantum-mechanical limits on the construction of electron logic.

Although the timeval and timespec structures are very useful for manipulating high-precision timestamps, there are unfortunately no standard functions for performing even the most basic arithmetic on them, so you’re often left to roll your own.
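(Some BSD-derived systems ship nonstandard timeradd(), timersub(), and timercmp() macros for struct timeval in sys/time.h, but nothing portable covers struct timespec.) A sketch of the kind of helpers you end up writing - the names here are made up for illustration:

#include <time.h>

#define NSEC_PER_SEC 1000000000L

/* Hypothetical helpers; not part of any standard API. */

static struct timespec timespec_add(struct timespec a, struct timespec b)
{
    struct timespec r;

    r.tv_sec  = a.tv_sec + b.tv_sec;
    r.tv_nsec = a.tv_nsec + b.tv_nsec;
    if (r.tv_nsec >= NSEC_PER_SEC) {    /* carry into the seconds part */
        r.tv_sec++;
        r.tv_nsec -= NSEC_PER_SEC;
    }
    return r;
}

static struct timespec timespec_sub(struct timespec a, struct timespec b)
{
    struct timespec r;

    r.tv_sec  = a.tv_sec - b.tv_sec;
    r.tv_nsec = a.tv_nsec - b.tv_nsec;
    if (r.tv_nsec < 0) {                /* borrow from the seconds part */
        r.tv_sec--;
        r.tv_nsec += NSEC_PER_SEC;
    }
    return r;
}

static int timespec_cmp(struct timespec a, struct timespec b)
{
    if (a.tv_sec != b.tv_sec)
        return (a.tv_sec < b.tv_sec) ? -1 : 1;
    if (a.tv_nsec != b.tv_nsec)
        return (a.tv_nsec < b.tv_nsec) ? -1 : 1;
    return 0;
}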

Another structure, used for interval timers and describing a time interval with nanosecond precision, looks like this:

struct itimerspec {
    struct timespec it_interval;    /* interval between expirations */
    struct timespec it_value;       /* time until first expiration */
};
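A minimal sketch of arming a POSIX per-process timer with it - error handling omitted, and on older glibc you must link with -lrt:

#include <signal.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    timer_t tid;
    struct itimerspec spec;

    /* With a NULL sigevent, the timer delivers SIGALRM on each expiration. */
    timer_create(CLOCK_MONOTONIC, NULL, &tid);

    spec.it_value.tv_sec     = 1;             /* first expiration: 1 s from now */
    spec.it_value.tv_nsec    = 0;
    spec.it_interval.tv_sec  = 0;             /* then every 250 ms */
    spec.it_interval.tv_nsec = 250000000L;

    timer_settime(tid, 0, &spec, NULL);
    pause();    /* SIGALRM's default action terminates the process */
    return 0;
}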

While the C time API tends to shape the time APIs presented by higher-level languages implemented in C, these subsecond-precision structures are one area where signs of revolt are visible. Python has chosen instead to accept the minor problems of a floating-point scalar representation; Ruby uses integral nanoseconds since the Unix epoch. Perl uses a mixture: a BSD-like seconds/microseconds pair in some functions and floating-point seconds since the Unix epoch in others.

On today’s true 64-bit machines, with relatively inexpensive floating point, the natural float representation of time would look like this:

seconds:              64 bits
fractional seconds:   48 bits
                      -------
total:               112 bits

That is exactly the 112-bit mantissa of a quad-precision float.