Tech companies have been slow to respond to the way their platforms have been used to amplify hate. Anonymity on social media platforms often makes it difficult to identify the right-wing radicalization happening to some Americans online, where users are exposed to violent and often racist disinformation.

We need new business practices and policies that address public harm propagated through media-technology platforms, particularly as bad actors use these platforms to enact violence on others. Important developments are under way in commercial content moderation, which allows humans to flag threats and other forms of dangerous content, but these efforts have yet to reach the level of impact needed, given the volume of media that traffics through these platforms. Algorithms and automated decision-making technologies are not yet sophisticated enough to recognize certain types of online threats before mass violence occurs.

We know that anti-government, anti-Black, anti-Muslim, anti-gay and anti-immigrant hate crime is a massive presence in this country. It is also important to note how white nationalist trolls attempted to take credit for Nikolas Cruz in the immediate aftermath of the mass shooting he carried out at Marjory Stoneman Douglas High School in Parkland, Florida, on Feb. 14, as part of their desire to amplify and enact hate.

It’s easy to see why they would use this opportunity to manipulate the media: In 2009, the United States Department of Homeland Security reported that right-wing extremists used the election of the first African-American President, Barack Obama, to increase recruitment. Racist propaganda and disinformation litter our online media landscape and, sometimes, incite action. We know people use online platforms to find information and make sense of their social experiences of marginalization, and that members of one sector of society feel their identities as white Americans are threatened or under attack, despite evidence to the contrary. This was the case for Dylann Roof, who murdered nine African Americans at the Emanuel African Methodist Episcopal Church in Charleston, S.C., in 2015. Roof said he developed the views that fueled his mass shooting online, where he searched for information about violence perpetrated by black Americans on whites.

It’s time we think about making our online platforms more transparent, rather than expecting consumers of social media and search engines to easily distinguish propaganda from fact. Rather than treating these media-tech companies as neutral, objective news and information companies, we could see them for what they are: advertising engines built to help companies and organizations know more about us and better target their products and services. We need greater transparency in how these systems work, more collaboration between the tech sector and its critics, and public policy to protect us all from harm.

