Last week, I got to chat with Epic CEO Tim Sweeney, one of the virtual reality gaming industry’s most prominent figures. Like many others, Sweeney believes that VR has the potential to transform how we interact online, especially as more sophisticated tracking systems translate body language, facial expressions, and other details into digital worlds. More specifically, he thinks virtual reality could make us treat each other better there. Unfortunately, this is almost certainly wrong — and if we wait for it to happen, I fear we’ll ruin social VR in the process.

During the interview, I asked Sweeney about how social VR would deal with the toxicity that multiplayer games and social networks already have had to address. “Both multiplayer games and online forums have this property of virtual anonymity. Other people can’t really see you, they don’t really know who you are. And so the sort of social moderating mechanisms in real life, and your desire not to offend people around you, don’t really adjust,” Sweeney told me. “Once your VR avatar really looks like you, and people can see you, and you can see them and their faces and emotions, I think all of the normal restraining mechanisms will kick in. If you insult somebody and you see that they have a sad look on their face, then you’re going to feel really, really bad about that. And you’re probably not going to do it again.”

Anyone who’s been bullied or catcalled knows that face-to-face decency has limits

At first glance, this sounds plausible: if people are more civil in face-to-face conversation, maybe the internet just needs more virtual faces. Anecdotally, some virtual reality developers have shifted away from photorealism because players found killing real-seeming people in VR disturbing. But there’s a gulf between an aversion to virtual killing and an aversion to saying nasty things, and even if only a minority of people treat each other badly, that minority can ruin things for everyone else.

As anyone who’s been bullied, catcalled, or otherwise harassed in real life can attest, social restraining mechanisms don’t create a blanket aversion to “offending people.” They make everyone worry about offending people they see as part of their in-group — or people they could face punishment for bothering.

There are good reasons to make online communication more expressive. Being able to read someone’s emotions better is great if you’re already invested in having a decent conversation. It could make it easier to detect sarcasm, convey intimacy, or tell whether you’ve accidentally caused distress. It’s a worthy and interesting goal, and one that could transform how we interact online.

But the claim that seeing emotions will make you care about a person gets the dynamic backward. Online griefers, for instance, love seeing firsthand evidence that they’ve hurt someone. The platitude “don’t feed the trolls” has its limits, but it captures an important harassment dynamic: the more visible and agitated someone’s response, the more “exploitable” they become for future attacks. Outside virtual reality, games like Hearthstone and Splatoon were praised for removing — not expanding — the ability to communicate freely with other players. And the argument that VR has a unique empathy-generating power comes with its own set of problems.

If griefers love knowing that they’ve hurt someone, will more emotions really help?

The internet isn’t hostile because people there don’t look real enough — it’s not even clear that tying users’ actions to real-life identities helps much, except insofar as it lets prosecutors literally put offenders in jail. Among other reasons, the internet is hostile because it puts a lot of disparate groups within arm’s reach of each other and makes abuse trivially easy to deliver across that distance. The odds of suffering any external social consequences are vanishingly low, and they’re often offset by the approval of your own digital tribe — see, for example, the rise of “professional victimizers” and their legions of fans. Even if you assume that most users are kind (or indifferent) to everyone they meet online, communications technology can vastly amplify a few bad voices. And attempts to correct those voices through old-fashioned social shaming often backfire, simply creating a new cycle of abuse.

Harassment has already proved to be a problem in VR social networks and multiplayer experiences: one of the most famous incidents of 2016 involved an anonymous player grabbing his female partner’s virtual chest in an archery game. Saying that things will get better once we just have the right combination of sensors only inspires complacency. Why bother fixing something in the short term when you could chase a utopian dream instead?

The internet isn’t hostile because people don’t look real enough

Fortunately, this isn’t the approach I’ve seen most developers take. When QuiVR’s creators found out about the incident above, they instituted a personal space bubble that prevented unwanted touching, as well as a gesture that could totally erase another player from view. VR social network AltspaceVR introduced a similar bubble after people reported harassment on the platform. These technological solutions aren’t a silver bullet; you also need strong social norms and moderation. But they’re a kind of infrastructure that empowers good citizens and makes trolls’ lives harder.
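The underlying logic of these safeguards is simple. Here’s a minimal sketch of how a personal space bubble and an “erase” gesture might work, purely for illustration — every name, radius, and data structure below is a hypothetical assumption, not QuiVR’s or AltspaceVR’s actual (unpublished) implementation:

```python
# Hypothetical sketch: hide any avatar that enters the local player's
# space bubble, plus any avatar the player has explicitly blocked
# (the "erase from view" gesture). Names and values are illustrative.
import math

BUBBLE_RADIUS = 1.0  # assumed comfort distance, in meters


def visible_avatars(local_pos, others, blocked_ids=frozenset()):
    """Return the (id, position) pairs that should be rendered.

    An avatar is hidden if the local player has blocked it, or if it
    sits inside the space bubble around the local player's position.
    """
    return [
        (avatar_id, pos)
        for avatar_id, pos in others
        if avatar_id not in blocked_ids
        and math.dist(local_pos, pos) >= BUBBLE_RADIUS  # Euclidean distance
    ]
```

Run each frame before rendering, a filter like this makes unwanted touching literally impossible to see: the offending avatar simply never reaches the player’s display.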

The sheer scale of the digital world can make it feel more dangerous than the “real” one, both inside and outside VR. But the internet can also provide spaces to engage with people you would never meet offline, safe from threats of physical force or economic pressure. The best communities have thrived by giving users control over what they can share, letting them choose who to interact with, and consistently kicking out people who break the rules — not by waiting for some final, perfect method of communication.