“The interwebs can be a very savage place.”

This assessment came from a participant in my study on online harassment—a young woman of color living in a low-income neighborhood in New York City. Her tone was only half-ironic. While she faces plenty of challenges in her daily life, the digital world worries her more. She’s scared of being harassed for what she posts online, having personal photos hacked and distributed without her consent, or getting “doxed,” slang for having one’s address, phone number, and other sensitive personal information posted publicly without permission. For this bright, motivated young woman, the internet is a frightening, dog-eat-dog world. It’s often safer to keep your opinions to yourself than risk retaliation.

Sadly, she’s not the only one who feels this way. A new report released by the Data & Society Research Institute and the Center for Innovative Public Health Research found that 47% of Americans have experienced online harassment and abuse, from nasty name-calling to stalking and privacy violations. This is concerning on its own, but the data also shows worrisome demographic divisions.

Many people once believed the internet would be a great equalizer, where diverse people could meet, learn, and contribute to national conversations. But this survey shows what many technology critics suspect: Not everyone has an equal voice online. This is at least partially because people who are marginalized in other areas of their lives are being systematically dissuaded from participating.

Men and women are equally likely to face harassment online, but women experience a wider variety of online abuse, including more serious violations. Young people also experience such behavior far more than older adults. Young women thus have it the worst: they’re much more likely than men or older women to face doxing, sexual harassment, cyberstalking, and in-person attacks. Lesbian, gay, and bisexual people are also more likely to experience harassment.

Researchers consistently find that people self-censor online to avoid retaliation. This could be positive: For instance, people might be less likely to use a racial slur online if they think they’ll be condemned for it. But given the differences in people’s experience of harassment, this survey suggests that young people, especially young women and LGB people, are less likely to make online contributions at all because they’re worried about being attacked for it.

Meanwhile, even when men and women have the same negative experiences, men are less likely to label them as harassment. This suggests that men—especially white, straight men—feel less vulnerable online, and so have less reason to self-censor.

Social media like Twitter and Facebook are certainly flawed, but they function as hosts for public conversations on a huge variety of social issues. If women, people of color, and LGB internet users are shying away from contributing because of well-founded fears of retaliation, their voices will be missing from this important civic sphere.

Social media doesn’t necessarily have to operate this way. Given the significant gender and racial disparities in the tech industry, the people who build social technologies are primarily white and Asian men. Creators usually build tools for themselves and their peers. Because white, male technologists don’t feel vulnerable to harassment in the same way that, say, women of color do, they don’t design social media to protect against online abuse. While sites like Twitter and Reddit have, belatedly, implemented various mechanisms to fight abuse (some more successful than others), their creators have historically prioritized an uncomplicated idea of free speech over any content regulation, even when certain voices are systemically excluded from participation as a result.

In a divisive time for American society, it’s crucial that everyone is heard. Social media companies need to take a stand and ensure that destructive online behavior doesn’t turn people away from sharing their voices.