I still remember the moment I first discovered the World Wide Web. It was in 1994 or 1995, at a row of desks that held the handful of computer terminals then available in the university library of the London School of Economics. On the Windows 3.1 desktop (so old there wasn’t even a Start button) I saw an unfamiliar icon—a crude, pixellated picture of a string instrument, with “Cello WWW” below it. I clicked.

The LSE home page from Dec. 31, 1996. (archive.org)

Cello was the first Windows-compatible browser, built by Thomas Bruce at the Cornell Law School to give lawyers access to the then-incipient web. It didn’t take long to figure out that I could click on the blue-underlined words to open other pages. Most of them led to information about the school, but there was one tantalizingly named just “The Internet.” I clicked.

This, or something very similar, was the first index of the web I ever saw. (archive.org)

That day a journey began. I started out by looking at the pages of other colleges and universities. If memory serves, on a page of a university physics department I found a link to NASA, where I got lost in a seemingly unlimited bounty of pictures of galaxies.

From there, I meandered into ever-more-distant backwaters, until I found myself reading a sort of online diary of a ham radio enthusiast somewhere in middle America. It was verbose, nerdy, inane, littered with exclamation marks and impenetrable radio jargon, semi-literate in places—and absolutely captivating. It was unfiltered access to the thoughts and experiences of a person thousands of miles away, of whose life I would otherwise have had no inkling and with whom I shared approximately nothing save for some subset of the English language—and he had exactly the same power to reach me as a US government agency with billions of dollars in funding. I finally forced myself to log off and left the library with my head buzzing with the potential of this new medium to democratize the world.

Techno-utopians were, of course, proclaiming the birth of a new democratic era at that very time. But it didn’t take me long to intuit that this playing field could not stay level for long. Those with the money to build more attractive web pages (never mind search-engine optimization, ad-targeting, social marketing and other attention-getting techniques as yet undreamt-of) would surely start to draw more eyeballs and influence to themselves than ordinary folk like my ham radio nut. In the digital world as in the real one, power would end up pooling around the already powerful.

Security consultant Bruce Schneier has now written a very good, clear thumbnail sketch, entitled “The battle for power on the internet,” of how the relationship between technology and power has evolved over the past couple of decades and will evolve in the next few. And while it may read as if he’s just making the typical liberal case for more openness and transparency online, there’s a twist. Schneier argues for these things not in the hope of coming closer to the ultimate participative democracy that the dreamers of 20 years ago prophesied, but more as a regulatory valve, a way of keeping the inevitable, endless dance of power that governs human history from spinning out of control.

The place of technology in history

This dance of power, Schneier points out, is governed by a basic principle: New technology benefits the nimble first, but is appropriated by the powerful later:

The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: They can make use of new technologies very quickly. And when those groups discovered the internet, suddenly they had power. But later, when the already-powerful big institutions finally figured out how to harness the internet, they had more power to magnify.

Right now, Schneier says, we are at the stage of increasing concentration of power in the hands of governments and large corporations. He attributes this to what he calls a “feudal” model. Peasants used to put themselves at the service of feudal lords in exchange for convenience and protection. Today, we entrust control of our personal data to Google and Facebook and in return they handle the technology and security for us.

In parallel, writes Schneier, both “totalitarian” and “democratic” governments have accrued a great deal of technological power over people. Their interests sometimes align with one another’s, and with those of corporations, and sometimes don’t. When they don’t, ordinary people can get trampled in the conflict.

The effect of accelerating technological change

I take issue with Schneier’s use of the word “feudal,” because it implies that we should see today’s technopolitics as something bad—a throwback to a pre-Enlightenment time. In fact, as he would probably be the first to admit, the relationship between people and their rulers—be they feudal lords, tyrant emperors, elected prime ministers, bank presidents or tech CEOs—has always involved an exchange of power for protection. The conflicts between those rulers have always created risks for ordinary people. (As Schneier also fails to point out, they create opportunities too, when one powerful ruler undermines another.) And the steady advance of technology has always created what Schneier calls a “security gap” between the nimble outliers who adopt a technology first and the established interests that appropriate it later.

But that’s an aside. Schneier’s key argument, as I understand it at least, can be boiled down to this:

As technological progress speeds up, this “security gap” between first adoption and establishment appropriation of technology grows.

As the security gap grows, so does the ability of the first adopters—criminals, rebels, dissidents, and disruptors in both the political and the business spheres—to threaten the institutions of power.

As the threat to institutions grows, so does the extent of the measures they take to defend themselves from threats, and the harshness of their responses to them.

As that defensiveness increases, so does the potential harm to the ordinary people caught in the middle of it all, who have no particular alliance with either the entrenchment of institutional power or the disruption of it.

What does this lead to, asks Schneier—an ever more repressive police state, or an anarchic breakdown into a Hobbesian society? I find it telling that our science fiction tends to depict either one or the other of these futures—and if it’s the police state, the plot is usually about the hero who blows it up, with the movie ending conveniently before we learn whether the long-term result was utopia or chaos.

Answering his own question, Schneier says that the future is probably neither of these extremes—but that the way to make sure of that is more transparency and public oversight of institutional power. These will “give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish.” The way to achieve this:

…we need to work to reduce power differences. The key to all of this is access to data. On the Internet, data is power. To the extent the powerless have access to it, they gain in power. To the extent that the already powerful have access to it, they further consolidate their power. As we look to reducing power imbalances, we have to look at data: data privacy for individuals, mandatory disclosure laws for corporations, and open government laws.

I think the point to understand here is this: Usually, the debate about privacy versus surveillance or transparency versus security is framed as an argument over what is the “right” balance between the two to achieve a just and stable society. Schneier is arguing something different: We need as much control over our own data and as much access to government data as possible, not as an endpoint in itself, but as the mechanism that allows that debate about society to be held. Or, to put it using a word very much of this era, fair access to data is a platform—the level surface on which people and rulers can dance the dance of power without falling flat on their faces.