Last spring, I wrote a little essay called "Digital Maoism" for the science debate Web site Edge.org, which is orchestrated by writer and literary agent John Brockman. The essay quickly took on a life of its own, spawning an ongoing series of commentary both in print and online. My central point in the original piece was that the current fad for aggregating the efforts of multitudes of people over the Internet is moving the Web in the wrong direction. The subsequent debates have touched on a number of important ideas that I would like to follow up on here.

Part of my argument admittedly focuses on highly personal values, such as my concern that collective online creations like Wikipedia have made the Web less expressive by absorbing the efforts of hordes of volunteer authors into an overly regularized scheme. I miss the challenging quirkiness of Web sites that have fallen into neglect since the rise of Wikipedia. It's a shame to see the Internet world increasingly diffracted by a single organizing principle, when the whole point of the Web for me is to experience the strangeness of other points of view. (I find blogs and other recent online fads both overly structured and too transient to replace the odd, revelatory worldviews laid bare in the original generation of Web pages.) Some people agree with me, some don't, and others cannot even understand why I am making any point at all about this. It's a matter of taste and perspective.

Other parts of my argument are more easily framed as scientific, or at least empirical, questions. An example is the problem of how to predict when the "wisdom of crowds" will work effectively. The term is best known as the title of a book by James Surowiecki and is often introduced with the story of an ox in a marketplace. In the story, a bunch of people all guess the animal's weight, and the average of their guesses turns out to be generally more reliable than any one person's estimate. A common idea about why this works is that the mistakes various people make cancel each other out; an additional, more important idea is that there's at least a little bit of correctness in the logic and assumptions underlying many of the guesses, so they center around the right answer. (This latter formulation emphasizes that individual intelligence is still at the core of the collective phenomenon.) At any rate, the effect is repeatable and is widely held to be one of the foundations of both market economies and democracies.
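The error-canceling effect is easy to see in a toy simulation (my own illustration, not part of the original essay): if each guesser's estimate is the true weight plus independent, unbiased noise, the mean of many guesses lands far closer to the truth than a typical individual does.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198  # pounds; an arbitrary "true" ox weight for this toy example

# Each guesser sees the true weight through independent, unbiased noise.
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd error:            {abs(crowd_estimate - TRUE_WEIGHT):.1f} lb")
print(f"typical personal error: {avg_individual_error:.1f} lb")
```

The catch, of course, is the assumption of independent and unbiased errors; when guesses share a systematic bias, averaging more of them does nothing to remove it.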

People have tried to tap into this collective wisdom in a variety of ways in recent years. There are experiments in using stock market–like systems, in which people bet on ideas to answer seemingly unanswerable questions, like when terrorist events will occur or when stem cell therapy will allow a person to grow new teeth. There is also an enormous amount of energy being put into aggregating the judgments of Internet users to create content, as in the collectively generated link Web site Digg.

Unfortunately, crowd dynamics are not always reliable. Markets have their tulip crazes, leading to bubbles and crashes, and crowds can turn into lynch mobs. Institutions that rely on crowds usually develop mechanisms to prevent such pathologies. Stock markets might adopt automatic trading shutoffs, for instance, which are triggered by overly abrupt shifts in price or trading volume. Wikipedia has had to put restrictions on how people edit entries in order to soften the level of chaos and conflict over controversial items.
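The shutoff idea can be sketched in a few lines of code (a hypothetical threshold of my own choosing, not a description of any real exchange's rules): watch tick-to-tick price changes and halt trading on the first overly abrupt drop.

```python
def circuit_breaker(prices, max_drop=0.07):
    """Return the tick index at which trading would halt, or None.

    Halts when price falls more than `max_drop` (here 7%, an
    illustrative figure) from the previous tick -- a toy version
    of an exchange's automatic trading shutoff.
    """
    for i in range(1, len(prices)):
        change = (prices[i] - prices[i - 1]) / prices[i - 1]
        if change <= -max_drop:
            return i
    return None

calm = [100, 101, 100.5, 102]
crash = [100, 99, 90, 70]       # 99 -> 90 is a roughly 9% single-tick drop

print(circuit_breaker(calm))    # no halt: None
print(circuit_breaker(crash))   # halts at index 2, the abrupt drop
```

Wikipedia's edit restrictions on controversial articles play an analogous role: a damping mechanism bolted onto the crowd after its pathologies became apparent.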

The Net has for the most part delivered happy surprises about human group potential. For instance, the rise of the Web in the early 1990s took place without leaders, ideology, advertising, commerce, or anything other than a positive sensibility shared by millions of people. Who would have thought that was possible? It stands to reason, however, that the Net can also accentuate negative patterns of behavior or even bring about unforeseen social pathology. Over the last century, new media technologies have often become prominent as components of massive outbreaks of organized violence. For example, the Nazi regime was a major pioneer of television and cinematic propaganda.

After a generation or so, people seem to become less affected by the power of a new electronic medium. Many people in the Muslim world have only recently gained access to satellite TV and the Internet; I wonder if that has something to do with the current wave of violent radicalism. I also worry about the next generation of kids around the world growing up with Internet-based technology that emphasizes aggregation, as is the current fad. Will they be more likely to turn into a mob when they come of age?

Since the Internet makes crowds more accessible, it would be beneficial to have a general and clear set of rules explaining when the wisdom of crowds is likely to produce meaningful results. Surowiecki proposes four principles in his book; in my essay, I came up with three. His rules are framed from the perspective of the interior dynamics of the crowd. For instance, he suggests there should be limits on the ability of members of the crowd to see how others are about to decide on a question, in order to preserve independence and avoid mob behavior.

My proposed rules are a little different from Surowiecki's in that they are framed more from the outside looking in at the crowd. For example, I would argue that a crowd shouldn't be allowed to frame its own questions and that the answers should never be more complicated than a single number or a single multiple-choice answer. I also propose that techniques usually associated with signal processing should be applied to crowds, like gradually changing how fast a crowd can act based on how it is performing.
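One way to read that signal-processing suggestion (my own sketch of the idea, with illustrative constants) is as a feedback loop: keep a running quality score for the crowd's recent decisions and scale the permitted action rate with it, the way an automatic gain control damps a noisy signal.

```python
class CrowdThrottle:
    """Adaptive rate limiter: a crowd that performs well may act faster.

    `quality` is an exponential moving average of per-decision scores
    in [0, 1]; the permitted actions-per-minute scales between a floor
    and a ceiling with that average. All constants are illustrative.
    """

    def __init__(self, min_rate=1.0, max_rate=60.0, smoothing=0.9):
        self.min_rate = min_rate        # actions/minute when quality is 0
        self.max_rate = max_rate        # actions/minute when quality is 1
        self.smoothing = smoothing      # closer to 1 = slower to react
        self.quality = 0.5              # neutral starting estimate

    def record(self, score):
        """Fold a new decision score (0 = bad, 1 = good) into the average."""
        self.quality = self.smoothing * self.quality + (1 - self.smoothing) * score

    def allowed_rate(self):
        """Actions per minute the crowd may currently take."""
        return self.min_rate + (self.max_rate - self.min_rate) * self.quality

throttle = CrowdThrottle()
for _ in range(30):
    throttle.record(0.0)        # a sustained run of bad decisions...
print(throttle.allowed_rate())  # ...drives the allowed rate toward the floor
```

The smoothing constant is doing the signal-processing work here: it acts as a low-pass filter, so a single bad decision barely slows the crowd, while a sustained run of them does.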

Maybe if you combined our approaches you'd get the magic seven rules; maybe those could be compressed into a smaller number. Then again, maybe one or both of us are on the wrong track. The problem is that there's been inadequate testing of such ideas. Numerous projects have looked at how to improve specific markets and other crowd-wisdom systems, but too few projects have framed the question in more general terms or tested general hypotheses about how the systems work. What a rich area to study!

There is another potential pitfall of crowd wisdom: the ability of information technology to lock in cultural or behavioral patterns. Suppose you don't set the rules effectively in advance and the crowd acts in an ugly way. If the connections within the crowd are mediated by digital technology, then an engineering challenge stands in the way of fixing the problem.

Internet-based designs like Wikipedia and Digg are enjoying a period of open possibility right now, but that won't last forever. The Internet at present is analogous to the digital Eden of the early 1980s, when personal computers still seemed mercurial and infinitely mutable. Layers of digital design can become locked in place because other layers come to depend on them. The PC, for example, has become a standardized thing with windows, a mouse, a hard disk divided into files, and so on. Our computers could have come out quite differently; the Macintosh was originally conceived without files as we know them today, although it acquired them before it shipped. Today, ideas like files and windows have become so entrenched that they might as well be elementary particles. While those ideas are probably not terribly influential on human behavior, locked-in Internet-based designs could be decisively important.

That's why there is such passion from all sides about the battles over things like digital rights management, digital privacy, and Net neutrality: When life is digital, any battle might turn out to be the end of the war. If a more restrictive outcome gets locked in, it becomes profoundly difficult to reverse. We are spectacularly lucky that the people whose early experiments turned into the Internet conceived of an optimistic open design that happened to get locked in.

The legacy effect might eventually ossify aspects of the Net that are now fluid. There is an astonishing and widespread denial about this process in some corners of the software world, particularly in the trendier domains of open source. The situation might improve in the future, but that will require fundamental improvements in the way we do computer science. I hope it happens. In my professional life, it's the problem that I work on the hardest. I'll talk more about that in a future column.

There is a third empirical problem to tackle, and it is the least comfortable. To what degree is mob behavior an inborn element of human nature? There are competing clichés about human identity: that we naturally and inevitably form into competing packs or that we would refrain from doing so if only we had decent gang-free peer groups in our teens. These theories can actually be tested. The genetic aspects of behavior that have received the most attention (under rubrics like sociobiology or evolutionary psychology) have tended to focus on things like gender differences and mating strategies, but my guess is that clan orientation will turn out to be the most important area of study.

I hope that improved understanding of the problems I've mentioned will come about before the Net can contribute to any large-scale outbreak of bad human behavior. A better and more general model of when the wisdom of crowds functions and when it breaks down will help us avoid Web-based designs that elicit cruel or stupid mob behavior. A better technical approach to avoiding the lock-in effect will help us correct mistakes along those lines if they occur. And a clearer picture of the nastier side of our genetic legacy will help us design information systems to avoid triggering evil behavior.

We need such breakthroughs soon. Given the ever-growing influence of the Net, my guess is that we have about 10 years to seek out the answers.