Current computer graphics are fairly well known and understood. But how did we get here? The evolution of computer graphics is intertwined with textual display, and it is difficult to consider the two separately.

An old saying has it that a picture is worth a thousand words. The exact quantification of the value of imagery versus text appears to vary somewhat with subject matter, and is probably better left to psychologists and social scientists. But there is little question of the kernel of truth in the saying, and it has been a driver of computer architecture for many years.

Computer graphics are taken for granted today. But getting here was a long and painful struggle, with hardware rarely keeping up with the demand for better images. Text in English is composed of a relatively small set of characters. The same is not true of images: graphics are computationally intensive, and they always seem to consume whatever speed and memory are available. But the demand was high enough that early computer graphics could be fairly crude and still find eager users.

From blinking lights to plotters

Getting computers to type text was, in comparison, a simple process. Even in the early days of computing, there were existing devices which could translate a simple binary pattern into text. The military, for example, had used teletype machines for many years. Programming a computer to output the code for a textual character to a teletype machine is relatively simple.

Early computers used mostly flashing lights, with punched cards or paper tape for input and output. When there is only room for a few hundred instructions, you take input and output in its simplest form. But sometimes the available technology drives applications, and sometimes, the need to do something becomes a driver of seeking new technology.

The potential for getting a computer to produce a picture of the data wasn’t missed. It would be more valuable if the picture were produced rapidly enough for the user to interact, but even producing an image of some sort that represented the computer’s contents or recent calculations had its merits.

IBM was offering an output printer on its 701 model in 1952. It also offered a primitive graphics solution (the model 740 “Cathode Ray Tube Output Recorder”) in 1954.

The 740 demonstrates just how big the demand for graphics was, and how minimal a capability was considered meaningful. The 740 was a cathode ray tube to which a camera could be attached. Digital-to-analog converters drove the tube, slowly drawing lines based on the computer’s digital outputs. This method gradually came to be known as “vector graphics,” to distinguish it from other technologies.

Lines were plotted one point at a time. IBM justifiably bragged at the time that points were plotted at a rate of 8,000 per second, with a positional accuracy of only 3% for any given point, but with good repeatability. You couldn’t reliably scale the resulting image, but the image would have at least conceptual value.

Typically, the camera shutter was opened when the drawing started, and closed when it finished. At that time, the film could be developed, and the image could be viewed later the same day. Needless to say, this tech wasn't suitable for playing video games. Of course, one could maintain a simpler image on the display simply by repeating the drawing instructions at a fairly high rate. But this used most or all of the CPU time, and limited the detail which could be drawn.

The 740 had a sister display, the 780, which had a long-persistence phosphor (20 seconds). While not as precise, when paralleled with the 740, it allowed the operator to verify that the image being produced was indeed the one desired. When you have to wait several hours for the film image to be developed, that’s a good idea.

But there is another way to get an image with a slow computer, and one that will yield an image to the users much sooner: a plotter. Gene Seid and Robert Morton, two of the founders of CalComp, developed the idea in 1953, but lack of funding kept the device off the market until 1959.

The idea is simple: drive a pen on two axes. That takes a pair of stepping motors, and something to put the pen down at the start of a line, and lift it again at the end. Software can calculate when the pen should be stepped in either axis to draw straight lines between two points, curved lines, or whatever.
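The step-calculation logic described above can be sketched with Bresenham’s line algorithm, which decides at each point whether to advance the pen along one axis, the other, or both. This is a minimal illustration of the technique, not any actual plotter firmware; the function name and the (dx, dy) step representation are my own.

```python
def pen_steps(x0, y0, x1, y1):
    """Yield unit pen steps (dx, dy), each -1, 0, or 1 per axis,
    that trace a straight line from (x0, y0) to (x1, y1) using
    Bresenham's line algorithm -- the kind of calculation a plotter
    driver performs to turn endpoints into stepper-motor pulses."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy          # running error between ideal and stepped path
    x, y = x0, y0
    while (x, y) != (x1, y1):
        e2 = 2 * err
        step_x = step_y = 0
        if e2 > -dy:       # error favors a horizontal step
            err -= dy
            x += sx
            step_x = sx
        if e2 < dx:        # error favors a vertical step
            err += dx
            y += sy
            step_y = sy
        yield (step_x, step_y)

# A shallow line from (0, 0) to (5, 2) becomes a mix of single-axis
# and diagonal steps that sum to the full displacement.
steps = list(pen_steps(0, 0, 5, 2))
```

The same loop handles curves if the software breaks them into short straight segments first, which is exactly how early plotting packages drew them.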

In time, fairly sophisticated software packages were developed which isolated the users from the plotter, and allowed them to describe the image in more human-friendly terms such as “m=1, plot y=mx+b for x=1 to 6,” and so on. If the plotter drew a shape, the shape could be filled with solid, dashed, or dotted lines to simulate black or grayscale. Calculating the bounds of the lines within the shape became a part of early plotting packages. Unless one is doing ray tracing, or something similar, the bounds for shading or color within a surface must still be calculated today, and the techniques still draw on these early methods.
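The bounds calculation mentioned above is, at its core, a scanline test: for each horizontal hatch line, find where it enters and leaves the shape, and draw pen strokes only across the interior spans. The sketch below shows that idea under my own assumptions; the function name and polygon representation are illustrative, not any historical package’s API.

```python
def hatch_spans(polygon, y):
    """Return the x-intervals where the horizontal line at height y
    lies inside the polygon (a list of (x, y) vertices in order).
    A plotting package would draw one pen stroke across each
    interval to shade the shape with hatch lines."""
    xs = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        # Half-open test: count an edge only if the scanline crosses
        # it, without double-counting shared vertices.
        if (y0 <= y < y1) or (y1 <= y < y0):
            xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
    xs.sort()
    # Consecutive pairs of crossings bound the shape's interior.
    return list(zip(xs[::2], xs[1::2]))

# A 4x4 square: the hatch line at y=2 runs inside from x=0 to x=4.
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Repeating this for a series of closely spaced y values produces the solid, dashed, or dotted fill patterns the text describes.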

In time, plotters had additional pens added to do drawings in color. But the number of colors was limited, and the drawings were still vectors.
