Way back in the 90s, in an era when Swedish pop bands were getting regular US radio play and CVS was the optimal source control system, Alan worked on a remote-execution service for a "Unix-like" OS. One of his co-workers had just left the company, and Alan needed to track down a bug in a module that the co-worker had more or less owned during their tenure.

The specific block of C code in question looked roughly like this:

int nsel, pty_fd, net_fd;
fd_set rdset;
. . .
FD_ZERO(&rdset);
FD_SET(pty_fd, &rdset);
FD_SET(net_fd, &rdset);
nsel = select(max(pty_fd, net_fd), &rdset, NULL, NULL, NULL);

Now, select is a system call, and it's a bit of a weird one, and Alan assumed that was where the root issue lay. FD_SET is a standard macro (provided by sys/select.h) which adds a file descriptor to a set, so in this example, we expect to place two file descriptors into our set.

We then use select to monitor those two file descriptors, so that we can get input from whichever one has bytes available.

At a glance, this code looks almost right. The first parameter to select isn't a count of how many descriptors you're passing; it's one more than the highest-numbered file descriptor in any of the sets. So taking the max of the descriptors is actually the standard idiom here; the call is merely off by one, and should read max(pty_fd, net_fd) + 1.

So whatever weirdness in the code Alan was observing, that must be the root cause, right? Well, max is the root cause, but not in the way you'd think. A year earlier, Alan's co-worker had committed this to source control:

#define max(x, y) 16 /* ((x) > (y) ? (x) : (y)) */

While you can get away with this sort of thing when providing a truly random number, you can’t do it when computing a max.

Alan adds:

There’s no real shame in having one’s work tree in that state while learning the (um … less-than-intuitive) interface to “select()”, but committing the code as is to CVS and leaving the company in good standing over a year later would probably prompt some blushing today.