In this article we’ll go on a journey from 4-bit microcontrollers to room-sized mainframes and learn how porting C to each of them helped people separate the essence of the language from the environment of its birth. I’ve found technical manuals and videos for this article to help bring each computer to life.

It’s amazing that, by carefully writing portable ANSI C code and sticking to standard library functions, you can create a program that will compile and work without modification on almost any of these weird systems.

I hope that being exposed to these examples will help you write code more portably, and dispel the belief that current computers, with their multiple cores, cache hierarchies, and pipelining, are somehow too alien for C. A language tough enough to handle the diversity of old machines is tough enough to handle today’s relatively homogeneous CPUs.

To prepare this article I worked backward from the book “Portable C” (by Henry Rabinowitz), searching for architectures that illustrate each of the pitfalls he points out. You should read the book for a great explanation of what the author calls “C-World,” a semantic model of the execution of a C program.

Unisys 1100/2200

While this video doesn’t show the actual computer in operation, you can still see the shape of one of the control panels. The gentleman in the video has a quixotic fascination with it.

Video source: youtube

The first unusual thing about the architecture is its word size. You may be familiar with datatypes having powers-of-two bit sizes, but this Unisys series went with multiples of 9! The word size is 36 bits, and the C compiler for the platform uses:

char — 9

short — 18

int — 36

long — 36

long long — 72

(The Honeywell 6000 was another machine with 9-bit char and 36-bit word.)

Just to make matters more interesting, the oddly sized integers use ones’ complement binary arithmetic. That’s right, in this system there are distinct values for positive and negative zero. (CDC computers also used ones’ complement.)

The thirty-six-bit integers can hold a lot, but guess what they can’t hold on this architecture? Pointer values. Section 8.6.1 of the C manual for the Unisys 2200 says:

A pointer in UC cannot be treated as an integer. A UC pointer is a two-word structure with the base virtual address (VA) of a bank in the first word and a bit-word pointer in the second word. The bit-word pointer is necessary since the 2200 hardware does not have byte pointers; the basic pointer in the 2200 hardware is a word (36-bit) VA pointer that can only point to words. The bit-word portion of the UC pointer has a bit offset in the first 6 bits of the word and a word offset in the lower 24 bits of the word. If you convert (cast) a UC pointer to a 36-bit integer (int, long, or unsigned), the bit offset is lost. Converting it back to a C pointer results in it pointing to a word boundary. If you add 1 to the integer before converting it back to a pointer, the pointer points to the next word, not the next byte. A 36-bit integer is not capable of holding all the information in a UC pointer.

If you think regular pointers are demanding, section 8.6.2 says that a function pointer requires a full eight words!

A function pointer is 8 words long and has a completely different format. Only two words out of the 8-word function pointer are actually (currently) used by UC-generated code. (The second and third words.) (More of the words are used by other UCS languages such as FORTRAN and COBOL.) You can cast UC data pointers to function pointers and function pointers to data pointers and not lose any information. The two words that hold information are simply moved back and forth.

Finally, if you think the Unisys is confined to the pages of history, you’re mostly right, but not entirely. They still manufacture and sell the “ClearPath Dorado” which uses the 2200 architecture.

Unisys ClearPath A Series

Video source: youtube

Like the previous Unisys machine, the ClearPath has an unusual word size. Here are the integral data type sizes for the ClearPath C compiler:

char — 8

short — 48

int — 48

long — 48

long long — ??

This machine uses neither two’s complement nor ones’ complement signed arithmetic – it uses sign-magnitude form instead.

AT&T 3B (or 3B2)

Video source: youtube

A reliable old machine that had a devoted community. Fairly normal architecture, except it is big endian, unlike most computers nowadays. The char datatype is unsigned by default. Finally, the standard compiler for this architecture guarantees that function arguments are evaluated from left to right.

Symbolics Lisp Machine 3600

Video source: youtube

C is so portable that someone wrote a compiler – Symbolics C – for a computer running Lisp natively. Targeting the Symbolics Lisp machine required some creativity. For instance, a pointer is represented as a pair consisting of a reference to a list and a numerical offset into the list. In particular, the NULL pointer is <NIL, 0>, basically a NIL list with no offset. Certainly not a bitwise zero integral value.

The word size is 16 bits. There are no alignment requirements for data, although instructions must be on a 16-bit boundary. Here are the sizes of integer types defined by the compiler on the machine:

char — 8

short — 16

int — 16

long — 16

Motorola 68000

Video source: youtube

This processor found its way into many game consoles, embedded systems, and printers. It’s a pretty normal architecture, though big endian with a compiler default of unsigned chars. Also pointers (32 bits) are a different size than ints (16 bits).

One significant quirk is that the machine is very sensitive to data alignment. The processor had two-byte granularity and lacked the circuitry to cope with unaligned addresses. When presented with such an address, the processor would throw an exception. The original Mac (also based on the 68000) would usually demand the user restart the machine after an alignment error. (Similarly, some Sparc machines would raise a SIGBUS exception for alignment problems.)

Data General Eclipse

Video source: youtube

This machine uses a different numbering scheme for character- and integer-pointers. The same location in memory must be referred to by different addresses depending on the pointer type. A cast between char* and int* actually changes the address inside the pointer. Chris Torek recounts the details.

Cray T90

This machine provides another cautionary tale about trying to manipulate pointer values as if they were integers. On this architecture char* or void* are secretly word pointers with an offset stored in the three unused higher-order bits. Thus incrementing char* as an integer value would move to the next word but keep the same offset.

Prime 50 series

Video source: youtube

Notable for using a NULL pointer address that is not bitwise zero. In particular it uses segment 07777, offset 0 for the null pointer. (Some Honeywell-Bull mainframes use 06000 for the NULL pointer value, which is another example of non-zero NULL.)

R3000 MIPS

Video source: youtube

The DECstation uses the R3000 processor. It could be switched into either little- or big-endian mode at the programmer’s discretion. One quirk is that the processor raises an exception for signed integer overflow, unlike many other processors which silently wrap to negative values. Allowing a signed integer to overflow (in a loop, for instance) is thus not portable.

Acorn Archimedes A3010

Video source: youtube

This computer is actually the origin of the ARM architecture which we commonly find in mobile phones and Arduinos. The Acorn in particular uses ARM2, and has a 32-bit data bus and a 26-bit address space. Like the Motorola 68000, the ARM2 raises a SIGBUS exception for unaligned memory access. (Note that the Arduino is an example of a platform whose compiler still uses 16-bit ints.)

8086/8088/80286

Video source: youtube

Everyone who writes about programming the Intel 286 says what a pain its segmented memory architecture was. Each memory segment can address up to 64 KB, which is the largest contiguous region of memory that C could allocate per data object. (Thus size_t is smaller than unsigned int on this architecture.)

Because the full address of any word in memory was specified by a segment and offset, there are up to 4096 ways of referring to it by some combination of the two. (For instance address 0x1234 can be referenced as 0123:0004, 0122:0014, etc.) Also, variables declared next to one another may live in different segments, far apart in memory. This breaks some highly inadvisable tricks people used, like zeroing out a block of several variables by memset’ing the whole memory range between their addresses.

Despite this awkwardness, the personal computer was hot, and as of 1983 Byte Magazine (Vol 8, Issue 8) identified that there were nine different C compilers for the IBM PC! I found the manual for one of them, Lattice C. It’s the same compiler used on other IBM products such as the System 370.

In Lattice C both short and int are 16 bits, but long is 32. Char is signed by default, and of course the x86 is little endian.

Intel 8051

Sticking with the theme of memory complications, enter the 8051. It’s a microcontroller that uses a “Harvard architecture,” meaning that program memory (ROM) and data memory (RAM) are separate memory spaces attached to the same system. It uses word-oriented addressing for the ROM space and byte-oriented addressing for the RAM space, and needs differently sized pointers for each.

Many addresses are ambiguous, and could meaningfully point to either the RAM or ROM bank. 8051 compilers such as Crossware use a slightly larger “generic” pointer which tags the memory class in its high byte to resolve the ambiguity.

HP Saturn

Video source: youtube

The Saturn family are 4-bit microprocessors developed by Hewlett-Packard in the 1980s for programmable scientific calculators and microcomputers. The video above shows the HP-71B calculator, which is really more of a general purpose computer in strange packaging. You load bulk data into it by zipping a magnetic band through some kind of reader slot.

The Saturn processor doesn’t have hardware instructions to do signed arithmetic. That has to be emulated using combinations of other assembly instructions. Thus unsigned number operations are more efficient. No surprise that char defaults to being unsigned.

The memory is interesting. Its addresses are nibble-based, and can address 1M nibbles = 512 KB. Pointers are 20 bits, but stored as 32 bits. Saturn C datatypes are pretty normal:

char — 8

short — 16

int — 32

long — 32

long long — 64

float — 64

double — 64

MOS 6502

Video source: youtube

This was one of the first low-cost 8-bit microprocessors, and it found its way into all kinds of systems including the Apple II, Commodore 64, and Nintendo Entertainment System. This processor is downright hostile to C compilers. Check out the crazy optimization suggestions for the CC65 compiler.

Some of the troublesome things about this architecture:

There is no multiply or divide instruction in the assembly; they have to be emulated with other instructions.

Accessing any address higher than the “zero page” (0x0 to 0xFF) causes a performance penalty.

The CPU does not provide any 16-bit register or any support for 16-bit operations.

It only comes with a single “true” register.

However it accesses its memory in one cycle, so a programmer can use the zero page as a pool of 256 8-bit registers.

The 6502 helps reveal the edge of portability, the place where C’s “luxuries” are too costly.

PDP-11

Video source: youtube

The C home planet. Not much to say about it, because things work smoothly. The real surprises happened when porting PDP code to other machines. Pointers of all types and integers can be interchanged without casting.

One strange thing about this machine is that whereas 16-bit words are stored in little endian, 32-bit long ints use a weird mixed endian format. The four bytes in the string “Unix” when stored in the PDP-11 are arranged as “nUxi” if interpreted as big endian. In fact that scrambled string itself resulted when porting code from the PDP to a big endian machine.

VAX-11/780

Video source: youtube

The VAX is like a 32-bit PDP, the next machine in PDP evolution. People enjoyed coding for the VAX with its nice flat memory and uniform pointers of all types. People liked it so much that the term “VAXocentric” referred to sloppy coding by those who got too comfortable with the architecture and who didn’t bother to learn how other computers differed.

The assembly for x86-64 looks superficially similar to VAX, and people originally believed the VAX would outlast Intel. This proved incorrect, as the “attack of the micros” destroyed the mainframe and minicomputer markets.

Datatype sizes:

char — 8

short — 16

int — 32

long — 32

Programs run faster with data aligned properly, but there is no strict alignment requirement, unlike previous architectures such as the IBM 360, PDP-11, Interdata 8/32 etc. Size and alignment attributes are logically independent. The VAX-11 C compiler aligns all the basic data types on address boundaries which are multiples of the size of each type.

Other facts: the VAX C compiler doesn’t guarantee left-to-right evaluation of function arguments. Chars are signed by default. The PDP was OK with division by zero and returned the dividend, but the VAX causes an unmaskable trap.

Bell Labs wrote an interesting report about porting programs from the PDP to VAX, and some of their recommendations were adopted by ANSI C.

Conclusion

If these digital delectations make you want to learn more about writing portable code, then the best place to learn is in one of the excellent books on the topic. The one by Henry Rabinowitz mentioned earlier is great, as is another by Mark Horton. Good books are another strength of the C language. Unlike new trendy languages, C has been around long enough to accumulate professional and well regarded literature.