Incentivizing Better Speech, Rather Than Censoring 'Bad' Speech

from the there-are-other-solutions dept

This has gone on for a while, but in the last year especially, the complaints about "bad" speech online have gotten louder and louder. While we have serious concerns with the idea that so-called "hate speech" should be illegal -- in large part because any such laws are almost inevitably used against those the government wishes to silence -- that doesn't mean we condone or support speech designed to intimidate, harass or abuse people. We recognize that some speech can, indeed, create negative outcomes, and even chill the speech of others. However, we're increasingly concerned that people think the only possible way to respond to such speech is through outright censorship (often to the point of requiring online services, like Facebook and Twitter, to silence any speech that is deemed "bad").

As we've discussed before, we believe that there are alternatives. Sometimes that involves counterspeech -- including a wide spectrum of ideas from making jokes, to community shaming, to simple point-for-point factual refutation. But that's on the community side. On the platform side -- for some reason -- many people seem to think there are only two options: censorship or a free-for-all. That's simply not true, and focusing on just those two solutions (neither of which tends to be very effective) shows a real failure of imagination, and often leads to unproductive conversations.

Thankfully, some people are finally starting to think through the larger spectrum of possibilities. On the "fake news" front, we've seen more and more suggestions that the best "pro-speech" way to deal with such things is with more speech as well (though there are at least some concerns about how effective this can be). Over at Quartz, reporter Karen Hao recently put together a nice article about how some platforms are thinking about this from a design perspective... and uses Techdirt as one example, noting how we've built small incentives into our comment system to encourage better comments. The system is far from perfect, and we certainly don't suggest that every comment we receive is fantastic. But I think we do a pretty good job of hosting generally good discussions in our comments that are interesting to read -- certainly a lot more interesting than on many other sites.

The article also discusses how Medium has experimented with different design ideas to encourage more thoughtful comments as well, and quotes professor Susan Benesch (who we've mentioned many times in the past), discussing some other creative efforts to encourage better conversations online, including Parlio (which sadly was shut down after being purchased by Quora) and League of Legends -- which used some feedback loops to deal with abusive behavior:

In one experiment, Lin measured the impact of giving players who engaged in toxic behavior specific feedback. Previously, if a player received a suspension for making racist, homophobic, sexist, or harassing comments, they were given an error message during login with no specifics on why the punishment had occurred. Consequently, players often got angry and engaged in worse behavior once they returned to the game. As a response, Lin implemented “reformation cards” to tell players exactly what they had said or done to earn their suspension and included evidence of the player engaging in that behavior. This time, if a player got angry and posted complaints about their reformation card on the community forum, other members of the community would reinforce the card with comments like, “You deserve every ban you got with language like that.” The team saw a 70% increase in their success with avoiding repeat offenses from suspended users.
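The mechanism Lin describes is essentially a suspension notice that bundles the specific evidence, rather than an opaque error. As a minimal sketch -- all class and field names here are invented for illustration, not Riot's actual system:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReformationCard:
    """A suspension notice that shows the player exactly which
    messages earned the suspension, instead of a generic error."""
    player: str
    suspension_days: int
    offending_messages: List[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [
            f"{self.player}, your account is suspended for "
            f"{self.suspension_days} days.",
            "The following chat messages violated the rules:",
        ]
        # Include the evidence itself, so the feedback is specific
        lines += [f"  > {msg}" for msg in self.offending_messages]
        return "\n".join(lines)

card = ReformationCard("player123", 3, ["<harassing message>"])
notice = card.render()
```

The point of the design is the evidence list: the player sees their own words quoted back, which (per the experiment) made the punishment far harder to dismiss as arbitrary.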

However, the key thing, as Benesch notes, is getting past the idea that the only responses to speech that a large majority of people consider "bad" are to take it down and/or punish the individual who made it:

“There is often the assumption in public discourse and in government policymaking and so forth that there are only two things you can do to respond to harmful speech online,” says Benesch. “One of those is to censor the speech, and the other is to punish the person who has said or distributed it.” Instead, she says, we could be persuading people not to post the content in the first place, rank it lower in a feed, or even convince people to take it down and apologize for it themselves.

Obviously, there are limits on all of these options -- and anything can and will be abused over time. But by at least thinking through a wider range of possibilities than "censor" or "leave everything exactly as is," we can hopefully get to a better overall solution for many internet discussion platforms.
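Benesch's "rank it lower in a feed" option in particular lends itself to a simple sketch: instead of a binary keep/delete decision, a feed can multiply a post's score by a demotion factor derived from moderation signals. A toy illustration -- the scoring formula, signal names, and the 0.5 factor are all invented for this example:

```python
def feed_score(engagement: float, flags: int,
               demotion_per_flag: float = 0.5) -> float:
    """Demote rather than delete: each unresolved flag halves the
    post's ranking score, so flagged content sinks in the feed but
    is never removed outright."""
    return engagement * (demotion_per_flag ** flags)

posts = [
    {"id": "a", "engagement": 100.0, "flags": 0},
    {"id": "b", "engagement": 300.0, "flags": 3},  # popular but heavily flagged
]
ranked = sorted(posts,
                key=lambda p: feed_score(p["engagement"], p["flags"]),
                reverse=True)
# post "b" scores 300 * 0.5**3 = 37.5, so it ranks below "a"
# despite having triple the engagement
```

The design choice here is that nothing is censored: post "b" is still visible to anyone who looks for it, it just stops being amplified.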

Meanwhile, Josh Constine at TechCrunch recently had some good suggestions as well, specifically for Twitter and Facebook, on ways they can encourage more civility without resorting to censorship. Here's one example:

Practically, Twitter needs to change how replies work, as they are the primary vector of abuse. Abusers can @ reply you and show up in your notifications, even if you don’t follow them. If you block or mute them, they can create a new throwaway account and continue the abuse. If you block all notifications from people you don’t follow, you sever your connection to considerate discussion with strangers or potential friends — what was supposed to be a core value-add of these services. A powerful way to prevent this @ reply abuse would be to prevent accounts that aren’t completely registered with a valid phone number, haven’t demonstrated enough rule-abiding behavior or have been reported for policy violations from having their replies appear in recipients’ notifications. This would at least make it harder for harassers to continue their abuse, and to create new throwaway accounts that circumvent previous blocks and bans in order to spread hatred.
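Constine's suggestion amounts to a gating predicate on notification delivery: the reply is still posted publicly, but only accounts meeting certain trust criteria can land in a stranger's notifications. A minimal sketch, with the field names and the 30-day threshold invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Account:
    phone_verified: bool
    days_in_good_standing: int
    open_policy_reports: int

def reply_reaches_notifications(sender: Account,
                                min_good_days: int = 30) -> bool:
    """Gate @-reply notifications along the lines Constine suggests:
    require a verified phone number, a track record of rule-abiding
    behavior, and no open policy reports. Failing the gate hides the
    reply from the recipient's notifications -- it does not delete it."""
    return (sender.phone_verified
            and sender.days_in_good_standing >= min_good_days
            and sender.open_policy_reports == 0)

throwaway = Account(phone_verified=False, days_in_good_standing=0,
                    open_policy_reports=0)
regular = Account(phone_verified=True, days_in_good_standing=90,
                  open_policy_reports=0)
```

Under this gate, a freshly created throwaway account simply never surfaces in its target's notifications, which raises the cost of the block-evasion cycle Constine describes without censoring anyone's speech.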

There may be concerns with that approach as well, but it's encouraging that more people are thinking about ways that design decisions can make things better, rather than resorting to out-and-out censorship.

Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community. Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis. While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: abuse, comments, design, free speech, harassment

Companies: facebook, medium, twitter