Are we doomed yet? The computer-networked, digital world poses enormous threats to humanity that no government, no matter how totalitarian, can stop. A fully open society is our best chance for survival.

I've been talking to computers for over a decade. At the MIT Media Lab in the early '90s, I was a test subject in a doctoral research project on voice navigation. As I went about my weekly duties as a programmer in the Speech Group, I used a tool called xspeak to switch between windows on the desktop. Sometimes I did this in front of a camera, so my first encounter with voice-recognition had an element of glamour. The program had a vocabulary of only a few dozen words, but I enjoyed the novelty and not having to grab the mouse every 20 seconds.

I didn't consider using voice-recognition at home, however, until a couple of years later when I developed tendinitis in my wrists. I was paying the rent as a programmer and just didn't have the hands anymore to work on my fiction during the off-hours. In those pre-Pentium days, you had to pause a half-second between each word -- what's called "discrete speech." Even so, I was so impressed by the accuracy of the Kurzweil product of the time that I coughed up a month's pay to Kurzweil and dictated most of my novel "Demiurge" into my computer one word at a time. In my spare hours, I made intricate Buddhist mandalas from colored sand and then wiped them away in order to cultivate patience and detachment.

These days I am able to dictate continuously and script my own functionality. While writing for the computer game Deus Ex, I built a complete set of scripts in Dragon NaturallySpeaking to drive ConEdit, Ion Storm's proprietary conversation editor. I dictated over 8,000 lines of dialogue in a weird proto-language that sounded like this: "Placeholder speech. Replace speech. They're lying <period>. All they want is to study your tissues <period>. Update and close. Move up one. Camera shoulders left." I was a long way from chatting with Hal, but I was also a long way from when all I could say was the name of a window.
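
At bottom, a macro layer like that is just a lookup table from spoken phrases to editor actions. Here is a hypothetical sketch in Python -- not Dragon NaturallySpeaking's actual scripting interface, and the editor methods are invented for illustration:

```python
# A toy dispatcher for voice macros -- a hypothetical sketch, not
# Dragon NaturallySpeaking's real scripting language. The editor
# object and its methods are invented for illustration.

def make_dispatcher(editor):
    """Map spoken phrases to editor actions; everything else is dictation."""
    commands = {
        "update and close": editor.update_and_close,
        "move up one": editor.move_up,
        "camera shoulders left": lambda: editor.set_camera("shoulders-left"),
    }

    def dispatch(utterance):
        action = commands.get(utterance.strip().lower())
        if action is None:
            editor.insert_text(utterance)  # unrecognized speech becomes text
        else:
            action()

    return dispatch
```

The point of the sketch is that "speech" and "command" differ only in whether the phrase has an entry in the table -- which is why dictating dialogue and driving the editor could be interleaved in a single utterance stream.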

I have come to view all computer interaction as speech, whether I am speaking or clicking. If I voice-navigate to a car-dealer's Web site and spec out a new van, then for me it is literally true that my voice can express my longing for shag carpeting and a swank curtained bed -- and express it in precise terms to the industrial mechanisms necessary to bring such fantasies to life. I am restrained to a crude sign language and a limited vocabulary, but the software is improving. A generation of engineers is working hard to give me a formal yet versatile new language for such transactions.

The trend of all software applications is toward greater fluidity of expression because computers are symbol-processing machines. At Ion Storm, for example, we tie ourselves in knots to give game-players multiple solutions to problems, customizable characters, and branching or "multiform" story lines. We believe that a good computer narrative allows players to drive meaningful developments in the game world. To put that another way, we think that players should be able to participate in writing the story. The fact that our ideal in computer entertainment is the empowerment of end users to compose their own experiences is no accident. This ideal is common to every software application from PhotoShop to the pages of the World Wide Web. The freedom of expression in a 3-D world like Deus Ex -- the freedom to customize the self -- is a preview of the godlike powers of creation we will all have when the human-machine language progresses beyond crude signs to a true language of choice and customization.
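
The "multiform" story lines described above can be sketched as a simple graph in which the player's reply selects the branch. This is a toy illustration in Python, not Ion Storm's actual ConEdit data format:

```python
# A minimal sketch of a branching ("multiform") conversation tree.
# A toy illustration only -- not Ion Storm's ConEdit format.

class Node:
    def __init__(self, line, choices=None):
        self.line = line              # what the character says
        self.choices = choices or {}  # player reply -> next Node

# The player's replies select the path, so the player co-writes the scene.
ending_a = Node("Then we have nothing more to discuss.")
ending_b = Node("All they want is to study your tissues.")
root = Node(
    "They're lying.",
    {"I trust them.": ending_a, "Tell me more.": ending_b},
)

def traverse(node, replies):
    """Follow a list of player replies through the tree; return the lines heard."""
    heard = [node.line]
    for reply in replies:
        node = node.choices[reply]
        heard.append(node.line)
    return heard
```

Even in this stripped-down form, two players who make different choices hear different stories -- the data structure itself is what lets end users "participate in writing."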

With power, however, comes potential danger. We have no reason to fear new fathers who color-correct their baby photos, but what will we do when DNA and nanoscale machinery are just as easy to manipulate, when each of us is a potential terrorist able to compose a new viral genome with drag-and-drop? Given the nature of the Web and file-sharing, would we have a prayer of suppressing dangerous knowledge that could be turned into novel weapons of mass destruction? What I argue below is that we don't have to suppress knowledge at all. The open pursuit of knowledge is actually our greatest weapon against the dangers taking shape around us.

I believe that the coming "self-replicating" threats described by Bill Joy and others are real dangers. I believe that individuals will someday trade the secrets of mass death as easily as the Magic players of today trade playing cards. Nevertheless, I am prepared to live with such a future. In fact, I believe that an open society like ours would be better equipped to deal with these threats than even the most efficient police apparatus.

I am alarmed by the ease with which our society is being frightened into abandoning its hard-won openness. Numerous ideas currently in circulation, taken together, foretell a future which might shock our late-capitalist sensibilities, but which could very well become our reality, by degrees, if we don't take the time now to ask fundamental questions about what we value as a people.

- - - - - - - - - - - -

If the last two centuries can broadly be characterized as the Age of Machinery, then this century might be called the Age of Fashion, in all senses of the word. Fashion drives a market when choices multiply, and the new technologies are all about choice. Just as the player in a computer game fashions the story from increasingly fine-grained choices, so will real human beings fashion their lives from increasingly diverse and customizable materials.

The all-digital world of computer games is an exaggeration of the consumer paradise we are promising to build for ourselves. In Deus Ex, the player can select his race as though he were picking out something to wear. Other games let you pick your sex -- Ultima VII is one memorable example, and the upcoming Deus Ex: Invisible War will do the same. In real life, we are already able to select the sex of a child, and it won't be long before we can select a healthy child, then a child with perfect eyesight, then who knows. We will be able to fashion a human being with software built upon the twin-shuffle microcode of DNA, perhaps with friendly applications like "The Gene Construction Kit" by Textco or "Visual Cloning" by Redasoft, software currently used to visualize and design plasmid vectors. The deep language of Nature will reveal itself through simple menus on medical Web sites.

We are no longer content that science devise formal languages to describe natural phenomena; we want those languages digitally encoded and accessible from our PCs. Engineers and scientists are already comfortable using computers to access and manipulate the formal knowledge of their professions. Genetics, mentioned above, is the most obvious example. The Human Genome Project would have been unthinkable without the microcomputer. Chip design itself is leaving behind mechanical tasks like positioning wires and logic gates in favor of languages like Verilog and C++.

"Eventually you'll just be able to write in C++ (or C++ with hardware extensions) and convert it directly into a chip," says former 3dfx Senior Hardware Architect Wade Walker, adding that the technology still has a ways to go. The world of mechanical engineering is further behind but moving in the same direction with CAD/CAM and 3-D printers. No one is going to "print" a working automobile anytime soon, but companies like DaimlerChrysler and BMW routinely print out prototypes of individual components. Designers at Adidas print prototypes of their shoe soles. Other companies prototype toys, dinnerware, bottles, golf clubs, jet skis, and so on. A future generation of the technology might print functional components, eventually replacing some manufacturing with the printing of many copies of a CAD file. In almost every industry, people are conspiring to represent their products as data that can be altered, tested, and transmitted rapidly.
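
What it means to "write a chip" in an ordinary language can be suggested with a toy example: a one-bit full adder expressed as pure boolean logic. In a hardware description language (or C++ with hardware extensions), an equivalent description could be synthesized into actual gates; here, in Python, we merely simulate it:

```python
# A toy illustration of hardware described as code: a one-bit full adder
# and a ripple-carry chain, written as pure boolean logic. A sketch for
# illustration -- simulation only, not synthesizable hardware.

def full_adder(a, b, carry_in):
    """Return (sum, carry_out) for one bit position."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Ripple-carry addition over equal-length little-endian bit lists."""
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out
```

The same dozen lines that a programmer can read, test, and e-mail are, in principle, a complete specification of a physical circuit -- which is exactly the "products as data" trend the paragraph above describes.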

As the computer becomes the central tool for research and development, scientific knowledge takes on a new character. Like software, it becomes primarily functional rather than descriptive. During the age of the printing press -- which brought with it dictionaries, encyclopedias, tables, journals, proofs, and the modern community of scientists -- the project of science appeared to be the "understanding" or "description" of the natural world, which was conceived of as a clockwork set in motion by God. The engineer meanwhile peered into this vast, static field of knowledge and applied the insights that were useful for a particular problem. Now it seems that the project of science is not primarily to represent the natural world with language but to reconfigure the natural world as language, so that it can be composed, transformed, and manipulated in the ways our minds are equipped to operate upon knowledge itself.

Increasingly, an idea in one's head will map directly to a product on-screen. Just as our sexual perversions have been reduced to shorthand notations such as "shemale posers," "older spreads," and "a peeing blonde," which lead to immediate gratification, so will our other desires be expressible in tiny coded epithets, icons, and services. Consumerism got a bad rap in the 20th century because our choices were so limited. We all ended up driving the same SUVs and thinking that plaid Bermuda shorts are a neat idea. In a few decades, consumerism might well mature into a fine art of self-expression. The souls of our yuppies might yet be saved.

In the near future, perhaps we will all use software like CADTERNS Pro to custom-design the clothes we wear -- or we will go to online archives of millions of designs, pick what we want, and order a hard copy of the clothes from a print shop. Punk-ass teenagers will still choose to wear T-shirts from corporation-created rock bands, but conceivably they will have the option of downloading a file and modifying the imagery, the cut, the material. Even our mass-media-derived identities will acquire a personal flair. It will be the Age of Fashion, not because image-makers will rule the market, but because we will all be able to communicate our identities more exactly with customizable products.

While writing represents language, the computer embodies language functions, and thanks to the popularity of the Internet these functions are rapidly being integrated with economic production. The power of our voices to reshape materials to suit our pleasure will soon be limited only by our salaries.

But as advanced language-processing technology frees us as consumers, will it also make us free in more fundamental ways, as citizens, artists, parents, employees? Or will its functional nature -- and, by extension, its users -- be seen as a danger that needs to be regulated? The blueprint of a nuclear bomb is a dangerous thing, but without a nuclear reactor, easily visible in a satellite photo, the blueprint is just a blueprint. A DNA sequence that can be synthesized with the touch of a key is an entirely different matter. When knowledge itself becomes the immediate danger, we may not be so eager to let it operate freely.

Just like the spread of printing in the 18th century, the spread of computation will have an impact that is not just economic but also social, political, and ideological. Historians remember the 18th century as the Age of Enlightenment, a time of radical ideas, increasing freedom, and the rise of "the empire of public opinion," which gradually acquired a power sufficient to topple monarchies and lay the foundations of modern democracy. Some contemporary thinkers hail the computer as the next great leap forward for expression, freedom, and democracy. The computer is seen in quantitative terms, as a larger dose of the same ideas brought about by the print revolution three centuries ago.

The possibility exists, however, that computers and computer networks are creating a new situation entirely, one that may or may not be friendly to the old ideology. The current trend of opinion may in fact be cutting in the opposite direction. If we look carefully at what leading scientists, judges, lawmakers, and academics have been saying during the last few years, we might be surprised. We should ask whether these fragmentary opinions, reinforced by changing technology, might in fact collect into a coherent ideology, one that drifts away from the ideals of liberty and equality developed during the last great advance in communication.

In crude terms, governments are deciding what to do about networks. Since the rise and fall of Napster, everyone seems to have a theory about Internet piracy, but piracy is the smallest of the threats waiting for us in the digital age. The real danger is the spread of dangerous technologies.

The creative capacity of every industry is migrating into software. Combine that capacity with the Internet, and you have individuals all over the planet empowered to wreak all manner of unpredictable mischief. "The Gene Construction Kit" would have been a surreal title for a software application twenty years ago; today we can easily imagine "The Genome Construction Kit." Authorities around the world have been powerless to stop the spread of DeCSS, a program that circumvents DVD encryption, despite continued successful litigation against hackers. If someone posted a new Ebola-AIDS genome, the likelihood is that authorities would be equally helpless to stop its dissemination among terrorists. This approaching reality leaves us with a basic question about how to monitor information networks and protect ourselves from knowledge-enabled attacks.

Science-fiction writer David Brin, writing about surveillance technology for Wired in 1996, formulates the alternative futures well as a "tale of two cities" -- a choice between two ways of policing a city. The choice applies equally well to the Internet.

His point of departure is the surveillance craze that has swept the U.K. Beginning with King's Lynn, where crime in "trouble spots" dropped 98.6 percent when 60 remote-controlled video cameras were installed, constabularies all over the U.K. have rushed to duplicate the King's Lynn miracle. By the year 2000, over 1 million closed-circuit cameras were operating in the U.K. The trend has been slower in North America but unmistakable. In 2000, the New York Police Department had grown its surveillance effort to over 1,000 security cameras in public places such as parks, subway stations and public housing, up from a few hundred in 1998. The public surveillance issue offers a timely thought-experiment about how a society should manage its own self-knowledge.

In one hypothetical city, only the authorities have access to the cameras. The network is centralized, secret, and therefore vulnerable to abuses by government employees. Criminals are intimidated, but "[c]itizens walk the streets aware that any word or deed may be noted by agents of some mysterious bureau." In the other city, the cameras can be accessed by any citizen with a "wristwatch/TV" -- or, presumably, any device connected to the Internet. The network can be used by a parent whose child has wandered off, a person walking home alone at night, or, broadly speaking, a society that wants to make sure that police show a "minute attention to ritual and rights" when apprehending a suspect.

If we must submit to a surveillance society, I think it is clear that an open network, in which no group, agency, or individual is privileged over any other, would produce a society of better character than one in which the citizens remain separate from and observed by the government. Better for us all to be able to watch one another than for the "authorities" to monopolize this power and leave us with only the fear.

But, not surprisingly, the surveillance networks installed in the U.K., New York, and elsewhere more closely resemble the model of the first city. The most tangible threat we face in our daily lives is each other, and increasingly, especially since Sept. 11, we are calling for top-down solutions for keeping each other in check.

A similar trend has appeared in proposed solutions to high-tech terrorist threats. Advances in biotech, chemistry, and other fields are expanding the power of individuals to cause harm, and this has many people worried. Glenn E. Schweitzer and Carole C. Dorsch, writing for The Futurist, gave this warning in 1999: "Technological advances threaten to outdo anything terrorists have done before; superterrorism has the potential to eradicate civilization as we know it." Schweitzer and Dorsch are so alarmed that they go on to say, "Civil liberties are important for a democratic society; the time has arrived, however, to reconfigure some aspects of democracy, given the violence that is on the doorstep."

The Sept. 11 attacks have obviously added credence to their opinions. In 1999, they recommended an expanded role for the CIA, "greater government intervention" in Americans' lives, and the "honorable deed" of "whistle-blowing" -- proposals that went from fringe ideas to policy options and talk-show banter in less than a year. Taken together, their proposals aim to gather information from companies and individuals and feed that information into government agencies. A network of cameras positioned on street corners would nicely complement their vision of America during the 21st century.

If after Sept. 11 and the anthrax scare these still sound like wacky Orwellian ideas to you, imagine how they will sound the day a terrorist opens a jar of Ebola-AIDS spores on Capitol Hill. As Sun Microsystems' chief scientist, Bill Joy, warned: "We have yet to come to terms with the fact that the most compelling 21st-century technologies -- robotics, genetic engineering, and nanotechnology -- pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once -- but one bot can become many, and quickly get out of control."

Joy calls the new threats "knowledge-enabled mass destruction." To cause great harm to millions of people, an extreme person will need only dangerous knowledge, which itself will move through the biosphere, encoded as matter, and flit from place to place as easily as dangerous ideas now travel between our minds. In the information age, dangerous knowledge can be copied and disseminated at light speed, and it threatens everyone. Therefore, Joy's perfectly reasonable conclusion is that we should relinquish "certain kinds of knowledge." He says that it is time to reconsider the open, unrestrained pursuit of knowledge that has been the foundation of science for 300 years.

"[D]espite the strong historical precedents, if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we reexamine even these basic, long-held beliefs."

Joy proposes a system of verification that echoes the ideas of Schweitzer and Dorsch, one that embodies the information-gathering ideal of "transparency," "a verification regime similar to that for biological weapons, but on an unprecedented scale." As knowledge gains power to inflict damage on the world, we as a society may be compelled to control its development and dissemination. As a result, we may have to "reconfigure" our basic attitude toward freedom of speech, privacy, freedom of association -- those ideals which to some are the foundation of democracy.

This is the kind of future depicted, dystopically, in the game world I write for (Deus Ex), in which rampant terrorist activity, including a nanotech plague, spurs the United Nations to create a global intelligence agency that has wide latitude to interfere militarily in the affairs of sovereign nations. Spying, surveillance and intimidation of the populace are the modus operandi, and secrets are the currency of power. A shadowy organization finds that its secret knowledge of how to cure the plague gives it enough power to blackmail national governments. Hegemony in human affairs has rapidly fallen to those ruthless enough to pursue the "certain kinds of knowledge" that Bill Joy suggests we relinquish.

Does the future have to be this ugly? Can we safely assume that the world of Deus Ex was exaggerated for dramatic effect?

We cannot doubt that new categories of knowledge will be criminalized and that governments will seek ways to monitor and limit dissemination. Though industries will attempt to contain self-replicating threats on their own -- both with hardware and laboratory security -- the threats will never entirely go away. Consumer-level nanotech "assemblers" will probably be limited to constructing industry-approved products, DNA-synthesizing equipment might be relegated to secure laboratories, and improved encryption might even make it statistically impossible to copy music and movies. Many engineers think such measures will be sufficient, but corporations aren't intelligence agencies. At any time, a leak due to human error or deception could compromise a whole class of safeguards, just as poor security in the Xing Technology DVD player enabled hackers to construct DeCSS, the software program capable of "ripping" DVDs.

The case of DVD piracy shows how the legal system might respond to shortcomings in private-sector security. In August of 2000, the Motion Picture Association of America (MPAA) won a suit under the Digital Millennium Copyright Act (DMCA) against the hacker magazine 2600 for posting DeCSS on its Web site, www.2600.com. In addition, the MPAA won a court order that forbids 2600 "from linking their site to others that make DeCSS available." 2600 was not accused of developing the tool, which was written by a 15-year-old Norwegian named Jon Johansen, nor of using it. 2600 editor Emmanuel Goldstein has stated publicly, "None of us even HAS a DVD player." The magazine was held liable under the DMCA provision which states that "no person shall ... offer to the public, provide or otherwise traffic in any technology ... primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work protected under [the Copyright Act]." The congressional legislation very clearly forbids trafficking in the technology, which U.S. District Judge Lewis A. Kaplan says includes providing a URL with "a desire to bring about the dissemination" of infringing technology. Goldstein responded, "We can all laugh at such words but they represent something very sinister. We are now expected to believe that telling someone how to get a file with a link is the same as offering it yourself."

But we should not be surprised. The DMCA and this ruling are only the leading edge of the fight against self-replicating threats. The fact that the legal system has used such strong information-control measures to stop the relatively innocuous threat of DVD piracy indicates that similar measures will be used against the nanotech and biotech threats described by Bill Joy. "Knowledge-enabled" is the key phrase Joy uses to describe the threats; it means that to fight them governments might have to get into the business of controlling the flow of knowledge, as Kaplan did by enjoining 2600 from linking to DeCSS sites.

The legal line between speech and action will blur dramatically during this century. The new technologies, from nanotechnology to the online economy, will be created and implemented with computer language, which by nature is both "expressive" and "functional." How the courts untangle these two aspects of "code" will define 21st century attitudes toward new ideas and their regulation. Even Kaplan acknowledges that, legally, code must be treated as speech: "It cannot seriously be argued that any form of computer code may be regulated without reference to First Amendment doctrine. The path from idea to human language to source code to object code is a continuum." What he painstakingly argues, however, is that in contrast to the "expressive" component, protected by the First Amendment, the "functional" component of computer code can be regulated by government. "Computer code is not purely expressive any more than the assassination of a political figure is purely a political statement," he writes.

The inevitable concern is that free speech issues will become hazy when computer code is the central medium of expression for commerce, science, and technology. If any individual can code nanobot machinery or an Ebola-AIDS virus, then it won't be enough to e-mail your friends and say, "Watch out for an e-mail called 'ILOVEYOU.'" People will want safeguards. When everyone has access to formal languages that define material processes, then all of our voices will (potentially) have functional components, and maybe they will have to be regulated. We will all have the magical power to bring novel material structures into being simply by defining them on our computer screens, and perhaps, Harry Potter notwithstanding, a society of wizards will fail to coexist with modern democratic institutions.

The question we need to ask is whether a tightly regulated society would really be more secure than an open one. If so, then maybe there is some merit to "reconfiguring" our openness. However, if the benefits of a closed society are not dramatically apparent, then we would be fools to scale back our civil liberties, because, once lost, they would be very difficult to recover.

New rules for a society of wizards are being proposed and implemented every day. In recent times, a leading technologist has called for us to reconsider "the open, unrestrained pursuit of knowledge," a federal judge has ordered an injunction against hyperlinking with "a desire" to "disseminate," and the NYPD has upgraded its surveillance network. These changes reflect the increased power of individuals -- and the need of governments to keep individual power in check. Ironically, though increases in individual power during the print revolution catalyzed the ideals of freedom and individuality, corresponding increases in individual power during the computer revolution have catalyzed a sense of doom and a desire for autocratic rule.

As the "functional" language made possible by computers increases in scope and power, we will want to police its misuse. The dangers are too great to ignore. However, we do have a choice analogous to the one David Brin gives for surveillance technology. We can attempt to implement a centralized, top-down apparatus to monitor and manage what is said (or "coded") -- or we can use a more open system. The government can try to bottle up dangerous information, or it can yield to the disseminative power of the Web and instead help companies and individuals defend themselves against harmful technology. For practical reasons, the government may have to yield to the Web, but is there any logic in abandoning a preemptive defense? It is a small matter to grant DVD pirates a brief holiday while the movie industry works out a new standard; it is an entirely different matter to let a reckless undergraduate post the molecular structure of a flesh-eating nanobot.

I think the reality is that preventing dissemination on the Web is like exterminating cockroaches. We could put centuries of effort into the problem, but hackers will still scurry out from under the cookie jar. We will find out whether that theory is correct during this decade, as the governments and corporations of the world (at least in the West) try to stop online piracy. Clearly, if a society can't stop teenagers from spreading the word on how to "rip" DVDs or trade MP3s, then that same society will be hard-pressed to stop terrorists from spreading Ebola-AIDS or the latest self-replicating nanobot.

We might just have to accept that dangerous technologies are coming. Joy recommends "relinquishment" of certain lines of research, but even if the governments and corporations of the world could agree to swallow such a bitter economic pill, the kooks and hackers would continue chipping away at the unknown and publishing their findings on the Internet. Understanding will move forward, one way or another. Getting through this century may be a crapshoot for the human species (Joy cites Ray Kurzweil's "optimistic" prediction that we only have a "better than even chance") but sometimes you just have to roll the dice and hope for the best.

The question is: What sort of society is most likely to make it? If a police bureaucracy could really muzzle 15-year-old Norwegian hackers like Jon Johansen, then maybe it would outlast the alternative. But what happens if one clever kook slips through the cracks? What happens, in a police bureaucracy, if someone releases a nanotech plague into the environment? If the police can suppress information on the structure of the nanobots, then only a handful of government bureaus and hand-picked researchers may be allowed to work on a cure. Millions could die waiting for the bureaucracy to solve the problem. On the other hand, if the molecular structure of the pest is published worldwide, anyone with the expertise could help design defensive technology.

This kind of model has worked well in the fight against computer viruses and worms. The worm that Robert Morris unleashed in 1988, which shut down close to 10 percent of all networked machines, was decompiled within 12 hours, thanks to a spontaneous, collaborative effort among scientists at MIT, Berkeley, Purdue, and other universities. Berkeley released a full patch for Berkeley UNIX 15 hours after Morris released the worm. As an undergraduate that year, I casually came across a complete explication of how the worm worked, including the source code, posted to a public FTP site on machines at the MIT AI lab. The result of this open distribution of dangerous knowledge wasn't a meltdown of the Internet but improved security around the world. The Net got hit by an epidemic, some machines went down, but the system rebounded within days.

In his ruling against 2600, Kaplan compared DeCSS to a "propagated outbreak epidemic," an epidemic in which the disease is contagious, in contrast to a poisoned well, a "common source epidemic" which can be halted at the source. The analogy is apt. The self-replicating threats of the 21st century will propagate from location to location the way computer viruses now hop from computer to computer. The difference will be that instead of wiped hard drives we might have sick and dying people. A couple of years ago, for instance, the "ILOVEYOU" virus hit my father's computer. Perhaps we all can think of friends and family members who would have departed this world if computer viruses had real-world nanotech components.
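
Kaplan's distinction between the two epidemic types can be made concrete with a toy calculation -- the rates here are made up for illustration, not an epidemiological model:

```python
# A toy contrast between a "common source" epidemic (a poisoned well)
# and a "propagated outbreak" epidemic (a contagious agent).
# Made-up rates, for illustration only.

def common_source(doses, days):
    """A poisoned well: all cases come from one source, which can be capped."""
    return [doses] + [0] * (days - 1)   # new cases per day

def propagated(seed, spread_rate, days):
    """A contagious agent: each day's cases seed the next day's cases."""
    cases, new = [], seed
    for _ in range(days):
        cases.append(new)
        new = new * spread_rate
    return cases
```

With a spread rate above 1, the propagated outbreak grows geometrically and soon dwarfs any single poisoned well -- which is why threats that copy themselves, whether DeCSS files or nanobots, cannot be "halted at the source."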

A grim future indeed, but I am cautiously optimistic for a couple of reasons. First of all, most people in the world, despite their differences, want stable, healthy lives. As we have seen with the Internet, 0.1 percent of the population may always try to throw a wrench into the machine, but the rest of us will scramble to fix the problem, punish the pranksters, and defend against wrench-throwing in the future. Second, I think that even among the pranksters only a very few will cross over from fun-and-games with computers to deadly real-world viruses. At the worst, we face a few crazies and, more seriously, a handful of "rogue" nations and terrorist groups.

But it only takes one crazy, in theory, to invent a new disease. We might ask whether any society at all, free or totalitarian, could reverse-engineer a viral or nanotech pathogen fast enough to create a "cure" before the population is decimated. The example of nuclear weapons shows us clearly that a technology can be easy to deploy offensively but nearly impossible to defend against. This may be the situation with self-replicating threats, in which case we are doomed. Even so, our best chance of survival lies not in criminalizing certain kinds of expertise or knowledge but in disseminating that knowledge as widely as possible, so that any attack will be met by the widest possible resistance, a citizens' militia of knowledge workers, rather than a handful of cronies in an intelligence agency.

Classified knowledge creates divisions and hierarchies. In Deus Ex, the classifications in a U.N. intelligence agency run parallel to the levels within a secret society and take their nomenclature from the angelic ranks. The player moves through a world patterned by the natural law doctrine of Thomas Aquinas, in which the average citizen must accept the divine right of informed rulers, who through their access to information remain closest to the mind of Bureaucracy and therefore to the source of legitimate power.

Though we might be foolish to put too much faith in the romantic notion of the "citizens' militia," we should be very suspicious of laws that limit the creation or dissemination of knowledge. They threaten to create a privileged class of information shepherds who, though well-meaning at first, could easily abuse their dramatic power advantage over information consumers. We should not give up our freedom to know and to communicate unless we are certain that the new order would be vastly more secure than the present one -- and, as I argue above, the likelihood is that it would not.

- - - - - - - - - - - -

Knowledge hierarchies exist in every society and will continue to exist. The better educated will always rule the less educated. Modern democracies rely on public education to make this division somewhat permeable, and such systems are probably our best hope for preserving "liberty" for the average citizen. As the essential qualities of knowledge change, however, so will education. The fundamental threat lurking behind any reevaluation of knowledge is much broader than a potentially corrupt intelligence agency. The threat is that public education might become increasingly circumscribed, strengthening class barriers rather than softening them.

Instead of regulating access to the engineering languages of the coming century, we should teach the languages to anyone who wants to learn. We already see a dramatic economic divide between the computer literate and illiterate, a divide that will only widen as more products and technologies are developed on-screen. To say that learning to use a computer is like learning to use a new language is a metaphor but also more than a metaphor. Building anything with a computer -- from software to circuits to research papers -- requires the composition of logical and symbolic elements, abstracted through software. Some applications more closely resemble human language than others, but to be mastered they all require the same symbol-processing ability.

This is why we should be concerned about revisionist educators like William Crossman, who believes that we should forgo teaching the masses written language and instead give them access to the world's knowledge through the magic of VIVO (voice-in, voice-out) computers. In his book, "Compspeak 2050: How Talking Computers Will Recreate Oral Culture by Mid-21st Century," he predicts that all language functions other than speaking and listening will be handled by VIVO computers by 2050.

"Like most technologies, written language will serve its function until some better technology comes along to replace it," he writes. He believes that oral-aural communication is more efficient than writing and reading, and this is the reason, he says, that the average person prefers the telephone over the letter and the TV over the book. "In the 21st Century, people with access to VIVO-computer technology will once again be able to use spoken language to access all stored information. Talking computers are going to make writing, reading, spelling, alphabets, punctuation, written numerals, music notation, and all other notational systems obsolete."

As a long-time user of voice-recognition, my patience beaten down by the number of times every program I have ever used has confused "of" with "up," or "was" with "wasn't," or "their" with "there," I am certain that his expectations for the technology lean toward the optimistic. Nevertheless, even if recognition algorithms are perfected, we won't ever abandon notational systems. They are essential for organizing any large project. Today's electronic entertainment certainly contains many "oral-aural" elements, but without exception it is created with writing, editing, storyboarding, programming, and other notational systems.

This will continue to be the case, even if the mass of consumers becomes illiterate. In Deus Ex, conceivably, if we had tweaked one or two minor features, we could have released a game that required no reading ability whatsoever; a player could have walked around the 3-D world, listened to newspapers being read aloud, watched his character enter door-codes automatically, and so on. The average consumer might well have welcomed a game interface that required no reading. But it would have been impossible to make the game without one or more scripting languages. Even the dialogue, written in a proprietary "conversation editor," was inseparable from flag-checks, flag-setting, definition of camera angles, and other game logic, all integrated with the speech of the characters. Without explicit notation for game-state, this logic would have been opaque to the developers. A VIVO computer might have been able to convert the various "if-then" verbal constructions of the six designers, three programmers, and three writers into some sort of internal format, but without a common table of flags, defined at least with pictographs, the workflow would have been chaotic and debugging would have been futile. The situation would not be so different if we were stacking shots of a movie script to hand off to 100 CGI artists.
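The entanglement described above can be made concrete with a small sketch. ConEdit's actual format is proprietary, so the flag names, fields, and structure below are invented for illustration; the point is only that each spoken line carries preconditions (flag-checks) and side effects (flag-sets) that must live in a shared, explicitly notated table of game-state.

```python
# Hypothetical sketch of dialogue inseparable from game logic.
# Flag names and fields are invented; ConEdit's real format differs.

flags = {"met_paul": False, "knows_ambrosia": False}  # shared flag table

# Each line carries flag-checks (requires), flag-sets (sets),
# and presentation data such as a camera angle.
conversation = [
    {"speaker": "JC", "line": "They're lying.",
     "requires": {"met_paul": True}, "sets": {}, "camera": "shoulders_left"},
    {"speaker": "JC", "line": "All they want is to study your tissues.",
     "requires": {}, "sets": {"knows_ambrosia": True}, "camera": "close_up"},
]

def playable_lines(conv, state):
    """Return the lines whose flag-checks pass, applying their flag-sets."""
    spoken = []
    for node in conv:
        if all(state.get(k) == v for k, v in node["requires"].items()):
            state.update(node["sets"])
            spoken.append(node["line"])
    return spoken

print(playable_lines(conversation, flags))
```

Without a table like `flags`, written down somewhere all twelve developers can see it, each designer's verbal "if-then" would refer to state no one else could inspect, which is the debugging chaos the paragraph above describes.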

In other words, the average consumer of entertainment can easily be excused from learning to read, but those in the business of producing it will always need a chirographic means of managing assets, workflow, and logic. In a similar vein, the average consumer of products can be excused from learning to read. Within a few decades, VIVO software could conceivably handle a consumer's checkbook, taxes, VCR programming, warranty claims, etc. The average high-school-educated person is only required to read in a handful of situations as it is. Those annoying New Patient forms we have to fill out at doctors' offices? They could be eliminated, for all of us, with card-swipe technology that has existed since the '80s. Crossman is right: no one will need to read in a few decades, if we engineer the right software. However, the developers of products will require the same written skills as the developers of entertainment, and businessmen are not likely to abandon written contracts any time soon, even if they begin accepting voiceprint signatures. Home-buyers of today often let their eyes skip over the myriad fine-print details of mortgage contracts, and willingly yield to the verbal explanations of the realtor sitting at their elbow, but bank officials will always pore over every phrase with great care.

So, excusing the next generation from learning to read and write will succeed only in sharpening the divide between rich and poor, producer and consumer. If we return to the question of how to safeguard against self-replicating threats, however, maybe Crossman is on to something. Mass illiteracy would reduce the bulk of humanity to a herd of wait-staff and bus drivers, who would be easy to police by conventional means. Resources for truly transparent surveillance could be concentrated on the minority who receive a "dangerous" education. The Jon Johansens of tomorrow would be much easier to spot and guard against. We would have a clearly defined elite, not unlike the Party in "1984," and they would have to meet rigorous ideological and behavioral standards in order to keep their privileges.

Sound like a fanciful projection? This is David Brin's "City Number One" re-imagined with a broader set of technologies. We do in fact face a choice of two cities, and the one I just described is not so different from any future we might imagine in which our fundamental attitude toward knowledge has changed. Without a doubt, it would be the safest solution to the problems being considered by Bill Joy, the U.S. government, worried citizens like Schweitzer and Dorsch, Judge Kaplan, and others. Limit education and you limit dissemination. If language will soon acquire the ability to script and compose matter, then limit access to language technologies and you nip high-tech terrorists in the bud.

Universal education has been a brief experiment in the English-speaking world, and there is no reason to think it will stick around forever. Free education for all children became available in England only in 1899. U.S. children have had that luxury only since the 20th century, and we shouldn't forget that even the Enlightenment thinkers who laid the foundations of democratic ideology had a sweeping contempt for the "canaille," or "common people," and were very pessimistic about the prospects of educating them. If we allow our basic attitude toward knowledge to shift -- if we get into the business of criminalizing, censoring, monitoring, and limiting various kinds of knowledge -- I believe we will very quickly slip away from the ideals of universal education, open scientific enquiry, entrepreneurism, equality of opportunity, and the fecundity of creative effort that have made Western democracies so strong during the past two centuries.

The self-replicating, scriptable technologies are here and still arriving. Progress will continue. We aren't choosing whether or not to eat from the Tree of Knowledge. We are deciding whether to put a fence around it and ration the fruit. The choice is not between a perilous freedom and a secure tyranny, but rather between fear and trust. We might even find it easier to trust one another if each of us takes a bite out of that genetically engineered FLAVR SAVR tomato and gains the same knowledge of good and evil.