UNIX was a terrific workhorse for its time, but eventually the old nag needs to be put out to pasture. David Chisnall argues that it's time to retire UNIX in favor of modern systems with a lot more horsepower.




In 1971, the UNIX Time-Sharing System, First Edition, was released. This simple operating system allowed multiple users to share a single low-end minicomputer. It became popular largely because the source code was available free to universities, which produced a generation of graduates who grew up learning UNIX.

Rather than developing their own operating systems, many companies licensed the UNIX source code and produced their own derivatives to run on their hardware. Eventually, UNIX displaced most other commercial operating systems.

In the last decade, free clones and derivatives of UNIX have started to take over from the old-guard UNIX systems. In terms of source code, these versions share very little, if anything, with their predecessors, but much of their design and philosophy can be traced back to the original.

UNIX has a lot of strengths, but like any other design it's starting to show its age. Some of the points in this article apply more strongly to some UNIX-like systems than to others.

Everything Is a File (Unless It Isn't)

Everything in a UNIX system is a file. Well, except things that aren't files, such as sockets. This is widely regarded as one of the defining points of UNIX. What is a file in UNIX? It's a collection of bytes. No type information is encoded, so the only way to understand a file's contents is to know in advance what they mean.
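To make that concrete, here's a minimal sketch of what "knowing in advance" looks like in practice: since the filesystem records nothing about what the bytes mean, any typing must be guessed from content, typically by checking leading magic numbers the way the file(1) utility does. The helper name guess_type and the tiny magic table are illustrative, not any real API.

```python
# A UNIX file is just bytes; any "type" has to be inferred from content
# the reader already knows about, such as well-known magic numbers.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF-": "PDF document",
    b"\x7fELF": "ELF executable",
}

def guess_type(data: bytes) -> str:
    """Guess a file's type from its leading bytes. The filesystem
    itself offers no help: it stores no type information at all."""
    for magic, name in MAGIC.items():
        if data.startswith(magic):
            return name
    return "unknown bytes"
```

If the bytes don't start with a signature you already recognize, you're out of luck; the system can't tell you anything more.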

The file metaphor is becoming increasingly strained in UNIX:

Physical disks are files—these have a fixed size, but you can seek to any point on the disk.

Serial ports are files—these can be read from and written to, but seeking in a serial port has no meaning.

Normal files are also files, and these can be read sequentially or randomly and even increased or reduced in size. Of course, a program that's given a filename has no way of knowing what kind of operations are possible on a file, other than to try an operation and see if it fails. In good UNIX tradition, some of these operations, such as locking an NFS-shared file, will appear to work but silently fail.
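The "try an operation and see if it fails" approach described above can be sketched directly. A descriptor carries no capability information, so the only way to learn whether it supports seeking is to attempt a seek and interpret the error; pipes and sockets fail with ESPIPE ("illegal seek"), while regular files and disks succeed. The helper name is_seekable is hypothetical.

```python
import errno
import os

def is_seekable(fd: int) -> bool:
    """Probe whether a descriptor supports seeking. UNIX attaches no
    capability information to the descriptor, so trying is the only
    way to find out."""
    try:
        os.lseek(fd, 0, os.SEEK_CUR)  # seek nowhere, just to see if it works
        return True
    except OSError as e:
        if e.errno == errno.ESPIPE:   # "illegal seek": pipes, FIFOs, sockets
            return False
        raise
```

A pipe returned by os.pipe() reports False; a freshly opened regular file reports True. Note that this probe only covers the honest failures: the NFS locking case mentioned above is worse, because the operation reports success.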

The often-stated advantage of this paradigm is that you can connect programs to devices and they'll just work, without being specially designed to interface with the device. This was almost true at one point. Writing to a text-only line printer was exactly the same as writing to a text-only terminal or writing to a text file. Now, however, most people tend to deal in a little more than just plain text. If I have a program that outputs an image, can I just send that image to a terminal and have it display? Can I send it to the printer in the same way? Well, if I happen to have an X server that supports the XPrint extension, the answer is maybe. If I don't, then I have to send it in X drawing calls to the screen, in PostScript to the printer, and in a serialized byte stream to a file. Having the same interface for these devices does no good at all when I have to understand the device on the far end—far less good, in fact, than a higher-level abstraction layer would do me.
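The part of the paradigm that still holds can be sketched in a few lines: for plain text, one identical write() call serves a regular file, a pipe, or a terminal without change. The helper name emit is illustrative. The limitation is right there in the sketch: nothing richer than a byte stream fits through this interface, so an image would need per-device encoding that the common interface does nothing to negotiate.

```python
import os
import tempfile

def emit(fd: int, text: str) -> int:
    """Send plain text through the one interface UNIX offers every
    device: write() on a file descriptor."""
    return os.write(fd, text.encode())

# The identical call serves a regular file and a pipe...
tmp = tempfile.TemporaryFile()
emit(tmp.fileno(), "hello\n")

r, w = os.pipe()
emit(w, "hello\n")
# ...but anything richer than text (an image, say) needs a different
# encoding for each destination -- X drawing calls, PostScript, or raw
# bytes -- which the shared byte-stream interface cannot express.
```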