The internet sees more than its fair share of antisocial behavior. The common wisdom is that computer-mediation strips out the social cues that regulate our face-to-face interactions. This is similar to classical theories of mob behavior. According to these theories, members of large groups lose their self-awareness in a sea of faceless others and, with it, their ability to regulate their own behavior. Under such conditions, they act on their basest impulses.

Despite decades of research, no one has found a significant link between deindividuation, that supposed loss of self-awareness in the crowd, and actual behavior. [1] Rather, the key appears to lie in a combination of group membership and individual accountability. If online communities want to civilize the internet, it's not enough to make people use their real names. There must be meaningful consequences for misbehavior.

Arrangements of Social Consequences

Bernard Guerin has spent a decade studying the mechanisms that cause antisocial behavior in groups. By turning his microscope away from individuals and focusing instead on the situations that surround them, he has been able to illuminate an entire spectrum of social behavior. What's more, his model can provide concrete guidance for social software developers.

Group membership does strange things to people. Sometimes, it causes them to slack off, an effect that's known as "social loafing." Other times, it pushes them to new peaks of effort and performance; this is called "social facilitation." These phenomena are rarely studied alongside antisocial behavior, despite the fact that group membership plays a central role in all three.

Guerin argues that the key to understanding them lies in the arrangements of social consequences that define situations. This isn't just idle speculation; it's based on a thorough study of the scientific literature and a laundry list of his own experiments. Group membership interacts with individual accountability to mold people into rioters, free riders, and productive citizens.

In his most comprehensive experiment, Guerin asked students to brainstorm different uses for a brick. [2] ("Here's a brick. Come up with some things to do with it, as many as you can." Pretty simple.) Some students worked alone, while others were assigned to groups. However, no group discussion was allowed and each student made their own list. That's an important point: none of the students worked together, even the ones who had been assigned to a group. Half of the students in groups were asked to write their names on their individual lists, while the other half wrote down only their group name. (Half of the lone students were anonymous, but it's doubtful that they believed they were truly unidentifiable.)

Everyone was led to believe that they, or their group, would be evaluated on the total number of uses they devised. However, a panel of judges also rated the "negativity" of each contribution, based on how acceptable it would be in polite company. These ratings showed strong agreement among the judges and were used as a measure of antisocial behavior.

The results: Anonymous group members generated the same number of uses as students who worked alone, on average, but a significantly higher percentage of those uses were judged antisocial by the panel. This should come as no surprise to anyone who has participated in an online community.

More interesting were the results for individually identifiable group members. Not only did they brainstorm the greatest number of uses, theirs were by far the least antisocial. Since no group discussion was allowed, the improved performance was solely a result of perceived group membership, not of group collaboration. Let me repeat that...

Under conditions of individual accountability, group membership leads to higher performance and less antisocial behavior.

In-Group Bias

Group membership does even stranger things to people when they're dealing with their out-group. (People who share a group with you are called your "in-group." Everyone else is considered your "out-group.") The psychology of in-group favoritism is pervasive. You can see it in sports fans, school rivalries, ethnic tensions, and the age-old war between IT and Marketing. It's absolutely everywhere.

Henri Tajfel's classic experiment demonstrates that people will happily form group identities for pretty much no reason at all. [3] In his Minimal Group Paradigm, study participants are randomly assigned to groups and told that they have something in common, often their score on a contrived and meaningless test. Then, they're asked to decide how a pool of tokens should be divided among the other participants, including members of both their in-group and their out-group.

The results: favoritism and bias, also for pretty much no reason at all. Tajfel found that participants would favor members of their in-groups even when they never interacted with them and had no expectation of interacting in the future. They had nothing to gain from shortchanging out-group members, but they did it anyway. Pretty bleak.

Now, the good news. Subsequent research has shown that in-group favoritism virtually disappears under conditions of accountability to the out-group. In Dobbs and Crano's study, this just meant telling participants that their token allocations would be recorded and shown to members of their out-group. [4] That's simple transparency, something the internet is capable of providing in spades.

Take, for instance, online communities that allow users to upload content, comment on other people's content, and rate both content and comments. Such sites can have problems with malicious comments, biased ratings, and low-quality submissions. The above research suggests that exposing each user's rating actions to the whole community could increase motivation to submit quality content, eliminate favoritism when rating that content, and reduce the frequency of antisocial comments.
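To make the design suggestion concrete, here is a minimal sketch of what such transparency could look like in code. All names and structures are hypothetical, not drawn from any real system: the essential idea is simply that every rating is stored with public attribution, so anyone can inspect any user's full rating history.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Rating:
    rater: str       # public identity of the rater; never hidden
    item_id: str     # the content or comment being rated
    score: int       # e.g. 1 (worst) to 5 (best)
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class PublicRatingLog:
    """Stores every rating with attribution and exposes all of them."""

    def __init__(self) -> None:
        self._ratings: list[Rating] = []

    def rate(self, rater: str, item_id: str, score: int) -> None:
        self._ratings.append(Rating(rater, item_id, score))

    def ratings_by(self, rater: str) -> list[Rating]:
        # Anyone can review a user's complete rating record,
        # making favoritism visible to the out-group.
        return [r for r in self._ratings if r.rater == rater]

    def ratings_for(self, item_id: str) -> list[Rating]:
        return [r for r in self._ratings if r.item_id == item_id]
```

The design choice that matters here is the absence of any anonymous path: there is no way to submit a rating without attaching an identity, which is exactly the accountability-to-the-out-group condition that Dobbs and Crano found eliminates in-group favoritism.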

The mix of group identity and individual anonymity that characterizes most online communities inevitably breeds the antisocial behaviors that have become synonymous with the internet. Social software designers can make a difference by building for transparency and making individuals accountable for their actions, but there may be a price to pay...

Creativity and Evaluation

Experiments by Szymanski and Harkins [5] have shown that the mere threat of evaluation can inhibit creativity. Following the standard procedure, they asked study participants to brainstorm uses for a brick, but they had judges rate the creativity of the results, not their social acceptability. Only half of their participants were told that this evaluation would take place.

The result was that participants who knew their work would be judged produced a greater number of less novel uses. They were more productive, but less creative. The irony is that these people knew their work would be judged on its quality, not just its quantity, yet they were still less creative.

The lesson for social software designers is that conformity and creativity don't often mix. Meaningful social consequences will inhibit antisocial behavior, but they may also inhibit abnormal behavior of any kind. That may be acceptable to social or professional groups unrelated to creative expression, but all groups have problems to solve. Inhibiting creativity will put novel solutions in short supply.

However, research by Cecilia Ridgeway [6] indicates that groups will tolerate deviant behavior from members who are perceived as competent and motivated. Social software that allows communities to recognize their top contributors may actually provide more leeway for abnormal behavior, but only to those who are least likely to abuse it. How's that for a silver lining?

Notes