HOW FAST, HOW SMALL, AND HOW POWERFUL?

SETH LLOYD: Computation is pervading the sciences. I believe it began about 400 years ago, with the first paragraph of Hobbes's famous book Leviathan. He says that just as we consider the human body to be like a machine, a clock with sinews and muscles to move energy about, a pulse that beats like a pendulum, and a heart that pumps energy in, much as a weight supplies energy to a clock's pendulum, so we can consider the state to be analogous to the body: the state has a prince at its head, people who form its individual portions, legislative bodies that form its organs, and so on. In that case, Hobbes asked, couldn't we consider the state itself to have an artificial life?

To my knowledge that was the first use of the phrase artificial life in the form that we use it today. If we have a physical system that's evolving in a physical way, according to a set of rules, couldn't we consider it to be artificial and yet living? Hobbes wasn't talking about information processing explicitly, but the examples he used were, in fact, examples of information processing. He used the example of the clock as something that is designed to process information, as something that gives you information about time. Most pieces of the clock that he described are devices not only for transforming energy, but actually for providing information. For example, the pendulum gives you regular, temporal information. When he next discusses the state and imagines it having an artificial life, he first talks about the brain, the seat of the state's thought processes, and that analogy, in my mind, accomplishes two things.

First, Hobbes is implicitly interested in information. Second, he is constructing the fundamental metaphor of scientific and technological inquiry. When we think of a machine as possessing a kind of life in and of itself, and when we think of machines as doing the same kinds of things that we ourselves do, we are also thinking the corollary, that is, we are doing the same kinds of things that machines do. This metaphor, one of the most powerful of the Enlightenment, in some sense pervaded the popular culture of that time. Eventually, one could argue, that metaphor gave rise to Newton's notions of creating a dynamical picture of the world. The metaphor also gave rise to the great inquiries into thermodynamics and heat, which came 150 years later, and, in some ways, became the central mechanical metaphor that has informed all of science up to the 20th century.

The real question is, when did people first start talking about information in such terms that information processing rather than clockwork became the central metaphor for our times? Because until the 20th century, this Enlightenment mode of thinking of physical things such as mechanical objects with their own dynamics as being similar to the body or the state was really the central metaphor that informed much scientific and technological inquiry. People didn't start thinking about this mechanical metaphor until they began building machines, until they had some very good examples of machines, like clocks for instance. The 17th century was a fantastic century for clockmaking, and in fact, the 17th and 18th centuries were fantastic centuries for building machines, period.

Just as people began conceiving of the world in mechanical metaphors only when they had themselves built machines, people began to conceive of the world in terms of information and information processing only when they began dealing with information and information processing. All the mathematical and theoretical materials for thinking of the world in terms of information, including all the basic formulas, were available at the end of the 19th century, because those formulas had been created by Maxwell, Boltzmann, and Gibbs for statistical mechanics. The formula for information was known back in the 1880s, but people didn't know that it dealt with information. Instead, because they were familiar with things like heat and mechanical systems that processed heat, they called information in its mechanical or thermodynamic manifestation entropy. It wasn't until the 1930s that people like Claude Shannon and Norbert Wiener, and before them Harry Nyquist, started to think about information processing as such, first for the purposes of communication, and then for the purposes of controlling systems using information and feedback. Then came the notion of constructing machines that actually processed information. Babbage had tried to construct one back in the early 19th century; it was a spectacular and expensive failure, and one which did not enter the popular mainstream.
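The claim that the "formula for information" was already on the books is concrete: Shannon's information measure and the Gibbs entropy of statistical mechanics share the same functional form, differing only in the base of the logarithm and a multiplicative constant. A minimal sketch in Python (the function names are mine):

```python
import math

def shannon_entropy(probs):
    """Shannon's information measure, H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gibbs_entropy(probs, k_B=1.380649e-23):
    """Gibbs's thermodynamic entropy, S = -k_B * sum(p * ln p), in J/K."""
    return -k_B * sum(p * math.log(p) for p in probs if p > 0)

# A fair coin: one bit of information, k_B * ln 2 of thermodynamic entropy.
coin = [0.5, 0.5]
print(shannon_entropy(coin))  # 1.0
print(gibbs_entropy(coin))    # about 9.57e-24, i.e. k_B * ln 2
```

The same probability distribution yields the same quantity in two unit systems, which is why 19th-century physicists could write the formula down without knowing it "dealt with information."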

Another failure grew out of the wonderful work on cybernetics, spilling into other fields such as control theory back in the late 1950s and early 1960s, when there was this notion that cybernetics was going to solve all our problems and allow us to figure out how social systems work, and so on. That was a colossal failure, not because the idea was necessarily wrong, but because the techniques for doing so didn't exist at that point and, if we're realistic, may never exist. The applications of cybernetics that were spectacularly successful aren't even called cybernetics, because they're so ingrained in technology, in fields like control theory and in the aerospace techniques that were used to put men on the moon. Those were the great successes of cybernetics, remarkable successes, but in a narrower technological field.

This brings us to the Internet, which in some sense is almost anti-cybernetic, the evil twin of cybernetics. The word cybernetics comes from the Greek kybernetes, which means a governor; a helmsman, actually, since the kybernetes was the pilot of a ship. Cybernetics, as initially conceived, was about governing, or controlling, or guiding. The great thing about the Internet, as far as I'm concerned, is that it's completely out of control. In that sense the Internet goes way beyond, and completely contradicts, the cybernetic ideal. But in another sense, in the way the Internet and cybernetics are related, cybernetics was fundamentally on the right track. As far as I'm concerned, what's really going on in the world is that there's a physical world where things happen. I'm a physicist by training, and I was taught to think of the world in terms of energy, momentum, pressure, entropy. You've got all this energy, things are happening, things are pushing on other things, things are bouncing around.

But that's only half the story. The other half of the story, its complementary half, is the story about information. In one way you can think about what's going on in the world as energy, stuff moving around, bouncing off each other — that's the way people have thought about the world for over 400 years, since Galileo and Newton. But what was missing from that picture was what that stuff was doing: how, why, what? These are questions about information. What is going on? It's a question about information being processed. Thinking about the world in terms of information is complementary to thinking about it in terms of energy.

To my mind, that is where the action is: not just thinking about the world as information on its own, or as energy on its own, but looking at the confluence of information and energy and how they play off against each other. That's exactly what cybernetics was about. Wiener, the real father of the field, conceived of cybernetics in terms of information, things like feedback control. How much information, for example, do you need to make something happen?

The first people to study these problems were scientists who happened to be physicists, and the first person who was clearly aware of the connection between information, entropy, and physical quantities like energy was Maxwell. Maxwell, in the 1850s and '60s, was the first person to write down formulas that related what we would now call information to things like energy and entropy, and the first to make that connection explicit.

He also had this wonderfully evocative, far-out, William Gibsonesque notion of a demon. "Maxwell's demon" is a hypothetical being that is able to look very closely at the molecules of gas whipping around in a room, and then rearrange them. Maxwell even came up with a model in which the demon sat at a partition, a tiny door between two rooms, which he could open and shut very rapidly. If he saw fast molecules coming from the right and slow molecules coming from the left, he'd open the door to let the fast molecules into the lefthand side and the slow molecules into the righthand side.

And since Maxwell already knew about this connection between the average speed of molecules and entropy, and he also knew that entropy had something to do with the total number of configurations, the total number of states a system can have, he pointed out that if the demon continued to do this, the stuff on the lefthand side would get hot and the stuff on the righthand side would get cold, because the molecules over on the left are fast and the molecules on the right are slow.

He also pointed out that there is something screwy about this, because the demon is doing something that shouldn't take much effort: the door can be as light as you want, the demon can be as small as you want, the amount of energy used to open and shut the door can be as small as you desire, and yet somehow the demon is managing to make one side hot and the other cold. Maxwell pointed out that this is in apparent violation of the laws of thermodynamics, in particular the second law, which says that if you've got a hot thing over here and a cold thing over there, heat flows from the hot thing to the cold thing, and the hot thing gets cooler and the cold thing gets hotter, until eventually they end up at the same temperature. It never happens the opposite way: you never see something that's all at one temperature spontaneously separate into a hot part and a cold part. Maxwell pointed out that there was something funny going on, that there was this connection between entropy and this demon who was capable of processing information.
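A toy numerical version of the demon makes the trick concrete. This is only a sketch, not real gas dynamics: the exponential speed distribution is a stand-in for the Maxwell-Boltzmann distribution, and the threshold speed of 1.0 is arbitrary. It shows only that sorting molecules by speed drives the mean kinetic energies of the two sides apart:

```python
import random

def demon_sort(n_molecules=100_000, threshold=1.0, seed=0):
    """Toy Maxwell's demon: draw molecular speeds for one mixed gas, then
    let the demon send fast molecules left and slow molecules right."""
    rng = random.Random(seed)
    speeds = [rng.expovariate(1.0) for _ in range(n_molecules)]
    left = [v for v in speeds if v > threshold]    # fast: becomes the hot side
    right = [v for v in speeds if v <= threshold]  # slow: becomes the cold side
    mean_ke = lambda vs: sum(v * v for v in vs) / (2.0 * len(vs))  # per unit mass
    return mean_ke(left), mean_ke(right)

hot, cold = demon_sort()
print(hot > cold)  # True: the demon has separated hot from cold
```

What the toy model cannot show is the resolution of the paradox, which lies in the cost of the demon's information processing itself.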

To put it all in perspective, as far as I can tell, the main thing that separates humanity from most other living things is the way that we deal with information. Somewhere along the line we developed sophisticated mechanisms for communicating via speech. Somewhere along the line we developed natural language, which is a universal method for processing information: anything that can be imagined can be described in it, and anything that can be said can be said using language.

That probably happened around a hundred thousand years ago, and since then, the history of human beings has been the development of ever more sophisticated ways of registering, processing, transforming, and dealing with information. Through these methods, society creates forms of organization that are totally wild compared with the organizational structures of most other species, and that, if anything at all makes us distinctive, is what makes the human species distinctive. In some sense we're just like any ordinary species out there. The extent to which we are different has to do with having more sophisticated methods for processing information.

Something else has happened with computers. We have created these devices, which can already register and process huge amounts of information, a significant fraction of the amount of information that human beings themselves, as a species, can process. When I think of all the information being processed there, all the information being communicated back and forth over the Internet, or even just all the information that you and I can communicate back and forth by talking, and then look at the total amount of information being processed by human beings and their artifacts, I see that we are at a very interesting point of human history: the stage where our artifacts will soon be processing more information than we physically can. So I have to ask, how many bits am I processing per second in my head? I can estimate it: with around ten billion neurons, it comes to something like 10 to the 15 bits per second, around a million billion bits per second.

Hell if I know what it all means — we're going to find out. That's the great thing. We're going to be around to find out some of what this means. If you think that information processing is where the action is, it may mean in fact that human beings are not going to be where the action is anymore. On the other hand, given that we are the people who created the devices that are doing this mass of information processing, we, as a species, are uniquely poised to make our lives interesting and fun in completely unforeseen ways.

Every physical system, just by existing, can register information. And every physical system, just by evolving according to its own peculiar dynamics, can process that information. I'm interested in how the world registers information and how it processes it. Of course, one way of thinking about all of life and civilization is as being about how the world registers and processes information. Certainly that's what sex is about; that's what history is about. But since I'm a scientist who deals with the physics of how things process information, I'm actually interested in that notion in a more specific way. I want to figure out not only how the world processes information, but how much information it's processing. I've recently been working on methods to assign numerical values to how much information is being processed, just by ordinary physical dynamics. This is very exciting for me, because I've been working in this field for a long time trying to come up with mathematical techniques for characterizing how things process information, and how much information they're processing.

About a year or two ago, I got the idea of asking how much information can possibly be processed, given the fundamental limits on how the world is put together: (1) the speed of light, which limits how fast information can get from one place to another; (2) Planck's constant, which tells you what the quantum scale is, how small things can actually get before they disappear altogether; and finally (3) the gravitational constant, which essentially tells you how large things can get before they collapse on themselves. It turned out that the difficult part of this question was thinking it up in the first place. Once I'd managed to pose the question, it only took me six months to a year to figure out how to answer it, because the basic physics involved was pretty straightforward. It involved quantum mechanics, gravitation, perhaps a bit of quantum gravity thrown in, but not enough to make things too difficult.

The other motivation for trying to answer this question was to analyze Moore's Law. Many of our society's prized objects are the products of this remarkable law of miniaturization: people have been getting extremely good at making the components of systems extremely small. This is what's behind the incredible increase in the power of computers, behind the amazing increase in information technology and communications, such as the Internet, and behind pretty much every advance in technology you can possibly think of, including fields like materials science. I like to think of this as the most colossal land grab that's ever been done in the history of mankind.

From an engineering perspective, there are two ways to make something bigger. One is to make it physically bigger, and human beings spent a lot of time making things physically bigger: working out ways to deliver more power to systems, to build bigger buildings, to expand territory, to invade other cultures and take over their territory, and so on. But there's another way to make things bigger, and that's to make things smaller. The real size of a system is not how big it actually is; it's the ratio between the biggest part of the system and the smallest part, or really the smallest part that you can actually put to use in doing things. For instance, the reason that computers are so much more powerful today than they were ten years ago is that every year and a half or so, the basic components of computers, the basic wires, logic chips, and so on, have gone down in size by a factor of two. This is known as Moore's Law, which is just an empirical fact about the history of technology.

Every time something's size goes down by a factor of two, you can cram twice as many of them into a box, and so every two years or so the power of computers doubles; over the course of fifty years the power of computers has gone up by a factor of a million or more. The world has gotten a million times bigger because we've been able to make the smallest parts of the world a million times smaller. This makes it an exciting time to live in, but a reasonable question to ask is, where is all this going to end? Since Moore proposed it in 1965, Moore's Law has been written off numerous times. It was written off in the early 1970s because people thought that fabrication techniques for integrated circuits were going to break down and you wouldn't be able to get things smaller than a scale size of ten microns.
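The arithmetic behind "a factor of a million or more" is simple compounding; here is the back-of-the-envelope version, assuming one doubling every two years:

```python
# Compound growth under Moore's Law: one doubling roughly every two years.
years = 50
doubling_period = 2                   # years per doubling
doublings = years // doubling_period  # 25 doublings in fifty years
factor = 2 ** doublings
print(doublings, factor)  # 25 33554432, i.e. a factor of about 30 million
```

Even small changes in the assumed doubling period swing the result by orders of magnitude, which is why such estimates are quoted as "a million or more" rather than as a precise figure.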

Now Moore's Law is being written off again, because people say that the insulating barriers between wires in computers are getting to be only a few atoms thick, and when an insulator is only a few atoms thick, electrons can tunnel through it and it's not a very good insulator anymore. Well, perhaps that will stop Moore's Law, but so far nothing has.

Does Moore's Law have to stop at some point? This question involves the ultimate physical limits to computation: you can't send signals faster than the speed of light, you can't make things smaller than the laws of quantum mechanics allow, and if you make things too big, they just collapse into one giant black hole. As far as we know, it's impossible to fool Mother Nature.

I thought it would be interesting to see what the basic laws of physics said about how fast, how small, and how powerful computers can get. Actually, the two questions, how powerful computers can be given the laws of physics, and where Moore's Law must eventually stop, turn out to be exactly the same, because they stop at the same place: the point at which every available physical resource is used to perform computation. Every little subatomic particle, every ounce of energy, every photon in your system, everything, is devoted to performing computation. The question is, how much computation is that? To investigate this, I thought a reasonable point of comparison would be what I call the ultimate laptop. Let's ask just how powerful this computer could be.

The idea here is that we can relate the laws of physics and the fundamental limits of computation to something we are familiar with: something of human scale, with a mass of about a kilogram and a volume of about a liter, like a nice laptop computer. A kilogram and a liter are comfortable to hold in your lap, a reasonable size to look at, easy to put in your briefcase, et cetera. After working on this for nearly a year, what I was able to show was that the laws of physics give absolute answers to how much information you could process with a kilogram of matter confined to a volume of one liter. Not only that: surprisingly, or perhaps not so surprisingly, the number of bits that you could register in the computer and the number of operations per second that you could perform on those bits are related to basic physical quantities, and to the aforementioned constants of nature, the speed of light, Planck's constant, and the gravitational constant. In particular, you can show without much trouble that the number of ops per second, the number of basic logical operations per second that you can perform using a certain amount of matter, is proportional to the energy of that matter.

For those readers who are technically minded, it's not very difficult to whip out the famous formula E = mc^2 and show, using the work of Norm Margolus and Lev Levitin here in Boston, that the total number of elementary logical operations you can perform per second using a kilogram of matter is the energy, mc^2, times two, divided by pi times h-bar, the reduced Planck constant. You don't have to be Einstein to do the calculation: the mass is one kilogram, and the speed of light is 3 times ten to the eighth meters per second, so mc^2 is about ten to the 17th joules. That's quite a lot of energy (roughly the amount used by all the world's nuclear power plants in the course of a week or so, I believe), but let's suppose you could use it all to do a computation. So you've got ten to the 17th joules, and h-bar, the quantum scale, is roughly ten to the minus 34 joule-seconds. So there you go: I take ten to the 17th joules, divide by ten to the minus 34 joule-seconds, and I get the number of ops: ten to the 51 ops per second. Ten to the 51 is about a million billion billion billion billion billion ops per second, a lot faster than conventional laptops. And this is the answer: you can't do any better than that, as far as the laws of physics are concerned.
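The back-of-the-envelope calculation above can be reproduced directly. The bound used is the Margolus-Levitin limit, ops per second = 2E / (pi * h-bar), with the standard values of the physical constants:

```python
import math

# Margolus-Levitin bound for the "ultimate laptop": ops/s = 2E / (pi * hbar)
c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J*s
mass = 1.0         # kg: the mass of the laptop

energy = mass * c ** 2                       # E = mc^2, about 9e16 joules
ops_per_sec = 2 * energy / (math.pi * hbar)  # about 5.4e50 ops per second
print(f"{energy:.2e} J -> {ops_per_sec:.2e} ops/s")
```

Keeping the factors of two and pi, the bound comes out near 5 times 10 to the 50, which rounds to the ten-to-the-51 figure quoted in the text.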