Like the ampersand, the ‘@’ symbol is not strictly a mark of punctuation; rather, it is a logogram or grammalogue, a shorthand for the word ‘at’. Even so, it is as much a staple of modern communication as the semicolon or exclamation mark, punctuating email addresses and announcing Twitter usernames. Unlike the ampersand, though, whose journey to the top took two millennia of steady perseverance, the at symbol’s current fame is quite accidental. It can, in fact, be traced to the single stroke of a key made almost exactly four decades ago.

In 1971, Ray Tomlinson was a 29-year-old computer engineer working for the consulting firm Bolt, Beranek and Newman.1 Founded just over two decades previously,2 BBN had recently been awarded a contract by the US government’s Advanced Research Projects Agency to undertake an ambitious project to connect computers all over America.3 The so-called ‘ARPANET’ would go on to provide the foundations for the modern Internet, and quite apart from his technical contributions to it, Tomlinson would also inadvertently grant it its first global emblem in the form of the ‘@’ symbol.

The origins of the ARPANET project lay in the rapidly advancing state of the art in computing and the problems faced in making best use of this novel resource. In the early days, leaving a ruinously expensive mainframe idle even for a short time was a cardinal sin, and a so-called ‘batch processing’ mode of operation was adopted to minimise downtime. Each computer was guarded by a high priesthood of operators to which users submitted their programs (often carried on voluminous stacks of cards) for scheduling and later execution.4 The results of such a ‘batch job’ might arrive hours or days later, or sometimes not at all: a single error in a program could ruin an entire job without any chance for correction. As time wore on, however, processing power grew and costs fell — by the mid-1960s, room-sized mainframes had been joined by newly compact ‘minicomputers’ measuring a scant few feet on a side5 — and the productivity of users themselves, rather than of the computers they programmed, became the greatest problem. In place of batch processing arose a new ‘time-sharing’ model wherein many users could ‘talk’ at once to a single computer, typing commands and receiving immediate feedback on their own personal terminal.6

The most common terminal design of the era was the ‘teletype’, a combined keyboard and printer on which a user could type commands and receive the computer’s printed reply.7 There were terminals which used other means to display input and output — notably cathode ray tubes, or CRTs — but teletypes were near-ubiquitous, spawning hardened military versions and 75-pound ‘portables’.8 Unlike today, where a keyboard and display normally occupy the very same desk as their host computer, teletypes were routinely separated from their hosts by hundreds of miles; a teletype might just as easily be in the next city as the next room, communicating with its host computer over a telephone line.

Despite this ability to be geographically distant from its host, each terminal was still inextricably tethered to a single computer. Moreover, many models of computer understood different sets of commands, making it difficult or even impossible to move programs from one model to another. Robert Taylor, the head of ARPA’s Information Processing Techniques Office, was well acquainted with both problems. His office contained three teletypes connected to three different computers in Santa Monica, Berkeley and MIT respectively, and each one spoke its own unique dialect of commands. Taylor said of the situation:

For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. [of Santa Monica] and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them.9

Thus, despite their ever-increasing complexity and utility, for the most part computers still lived in splendid isolation. It was the combination of these factors — the attractions of ever-increasing power and flexibility, impeded by a frustrating inability to share information between computers — which spurred ARPA to investigate a network linking many computers together. As Taylor concluded:

I said, oh, man, it’s obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPANET.9

In 1968, the agency solicited bids from 140 interested parties to build the experimental network.10 Although it would not be the first computer network, it was by far the most ambitious: not only would it span the continental United States (and, eventually, cross the Atlantic via satellite link11) but it would be the first to use a novel and as yet untested technique called ‘packet switching’ on a grand scale. Packet switching relied not on a direct connection between sender and recipient, but instead sent messages from source to destination by a series of hops between sites adjacent on the network, fluidly routing them around broken or congested links.12
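The hop-by-hop routing that packet switching relies on can be sketched in a few lines. The example below is purely illustrative — the site names and link topology are invented, and the real ARPANET used dedicated IMP hardware rather than software like this — but it shows the essential idea: when a link fails, a packet simply takes a different sequence of hops to reach the same destination.

```python
from collections import deque

def route(links, source, dest, failed=frozenset()):
    """Find a hop-by-hop path from source to dest, avoiding failed links.

    `links` is a set of undirected (a, b) pairs. Rather than relying on a
    fixed end-to-end circuit, the packet is forwarded one adjacent site
    at a time.
    """
    # Build an adjacency map, skipping any link marked as failed.
    neighbours = {}
    for a, b in links:
        if (a, b) in failed or (b, a) in failed:
            continue
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    # Breadth-first search for the shortest sequence of hops; a broken
    # link never blocks delivery as long as some other route exists.
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            return path
        for nxt in neighbours.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

# Hypothetical four-site network (names chosen for flavour only).
links = {("UCLA", "SRI"), ("SRI", "UTAH"), ("UCLA", "UCSB"), ("UCSB", "SRI")}
print(route(links, "UCLA", "UTAH"))                            # via SRI
print(route(links, "UCLA", "UTAH", failed={("UCLA", "SRI")}))  # rerouted via UCSB
```

With the direct UCLA–SRI link marked as failed, the same call finds the longer path through UCSB — the fluid rerouting around broken or congested links that made the technique so attractive, and that AT&T found so implausible.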

Some of the heavyweights of the time did not even bid. IBM, firmly wedded to the traditional (and expensive) mainframe computer, could not see a way to profitably build the network,13 while Bell Labs’ parent company AT&T flatly refused to believe that packet switching would ever work.14 In the end, an intricately detailed 200-page proposal submitted by relative underdogs BBN secured the contract, and construction of the ARPANET began in 1969. The project was a success, and by 1971 nineteen separate computers were communicating across links spanning the United States.15

Working in BBN’s headquarters, Ray Tomlinson had not been directly involved in building the network, but was instead employed in writing programs to make use of it.16 At the time, electronic mail already existed in a primitive form, working on the same principle as an office’s array of pigeon-holes: one command left a message for a named user in a ‘mailbox’ file, and another let the recipient retrieve it. These messages were transmitted temporally but not spatially, and never left their host computer — sender and recipient were tied to the same single machine.17
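The pigeon-hole principle is simple enough to model directly. The sketch below is illustrative only — it is not the actual TENEX mail commands, and the messages and user names are invented — but it captures the two operations the text describes: one command appends a message to a named user’s mailbox, another retrieves the contents. Everything lives on one machine.

```python
# One 'mailbox file' per named user, modelled here as an in-memory list.
mailboxes = {}

def leave_message(user, message):
    """Append a message to the named user's mailbox on this machine."""
    mailboxes.setdefault(user, []).append(message)

def retrieve_mail(user):
    """Empty the named user's mailbox, returning messages oldest first."""
    return mailboxes.pop(user, [])

leave_message("tomlinson", "Lunch at noon?")
leave_message("tomlinson", "Meeting moved to three.")
print(retrieve_mail("tomlinson"))  # both messages, in order of arrival
```

Note what is missing: there is no notion of *where* a mailbox lives, which is precisely the limitation Tomlinson set out to remove.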

Taking a detour from his current assignment, Tomlinson saw an opportunity to combine this local mailbox system with the fruits of some of his previous work. He used CPYNET, a command used to send files from one computer to another, as the basis for an improved email program which could modify a mailbox file on any computer on the network, but the problem remained as to how such a message should be addressed.16 The recipient’s name had to be separated from that of the computer on which their mailbox resided, and Tomlinson was faced with selecting the most appropriate character to do so from the precious few offered by the keyboard of his ASR-33 teletype.

Looking down at his terminal, he chose the ‘@’ character.

With four decades of email behind us, it is difficult to imagine anyone in Tomlinson’s situation choosing anything other than the ‘@’ symbol, but his decision to do so at the time was inspired in several ways. Firstly, it was extremely unlikely to occur in any computer or user names; secondly, it had no other significant meaning for the TENEX operating system on which the commands would run; and lastly, it read intuitively — user ‘at’ host — while being just as unambiguous for the computer to decipher. His own email address, written using this newly formulated rule, was tomlinson@bbn-tenexa, signifying the user named tomlinson at the machine named bbn-tenexa, the first of the company’s two PDP-10 mainframes running TENEX.18
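Tomlinson’s rule is compact enough to state as code: everything before the ‘@’ names the user, everything after it names the machine holding their mailbox. The sketch below is a modern Python rendering of that convention, not his original CPYNET-era program.

```python
def parse_address(address):
    """Split a network mail address into (user, host) at the '@' sign."""
    user, sep, host = address.partition("@")
    if not sep or not user or not host:
        raise ValueError(f"not a user@host address: {address!r}")
    return user, host

print(parse_address("tomlinson@bbn-tenexa"))  # ('tomlinson', 'bbn-tenexa')
```

The same unambiguity that made the rule easy for people to read — user ‘at’ host — makes it a one-line job for the machine to decipher.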

With the modifications to his mail program completed and an addressing scheme decided, Tomlinson typed out a brief message on the second machine and sent it to his mailbox on the first. The message was broken down into packets as it entered the ARPANET, which then routed each packet individually to its destination and reassembled them into a complete message at the other end, before it was finally appended to his mailbox on bbn-tenexa. In real terms, the two machines occupied the same office, and the first network email travelled a physical distance of only around fifteen feet.19 Perhaps appropriately to this anticlimactic first step, Tomlinson has since forgotten the contents of the message:

I have seen a number of articles both on the internet and in print stating that the first email message was “QWERTYUIOP”. ’Taint so. My original statement was that the first email message was something like “QWERTYUIOP”. It is equally likely to have been “TESTING 1 2 3 4” or any other equally insignificant message.20

Half-fearing the wrath of his superiors were they to discover his pet project, Tomlinson initially kept his invention to himself. As a colleague recalled, “When he showed it to me […] he said, ‘Don’t tell anyone! This isn’t what we’re supposed to be working on.’”21

His concern was misplaced: email quickly became the fledgling network’s first ‘killer app’, gaining influential converts such as ARPA’s director Steve Lukasik. Lukasik took to travelling with one of the aforementioned ‘portable’ teletypes so he could check his mail even when out of the office,22 and his managers quickly found email was the only reliable way to keep in touch with him.23 This viral quality led to an explosion in the use of email across the network, and by 1973 — only two years after the first email was sent from one side of Ray Tomlinson’s office to the other — it accounted for three-quarters of all traffic on the ARPANET.24

Tomlinson’s makeshift programs were replaced many times over as the ARPANET expanded and ultimately evolved into the modern Internet, but the use of the ‘@’ symbol remained a constant. As one half of an indivisible double act with the world wide web, email became synonymous with the Internet as a whole, and the ‘@’ symbol’s place in history was assured.