When Justice Department special counsel Robert Mueller announced criminal charges against Russian operatives for interfering with the 2016 presidential election, descriptions of how the Russians used modern communications technologies were all too familiar. Journalists referred to the ways in which Russia “manipulated social-media platforms,” and tech company executives like Facebook’s Rob Goldman decried “how the Russians abused our system.”

Joshua Geltzer is executive director and visiting professor of law at Georgetown Law’s Institute for Constitutional Advocacy and Protection, as well as an ASU Future of War fellow at New America writing a book on the issues discussed here. From 2015 to 2017 he served as senior director for counterterrorism at the National Security Council.

This is standard fare. When Russia manipulates elections via Facebook, or ISIS recruits followers on Twitter, or racist landlords deny rentals to Black guests and then offer the same properties to white guests through Airbnb, commentators and companies describe these activities as “manipulation” or “abuse” of today’s ubiquitous websites and apps. The impulse is to portray this odious behavior as a strange, unpredictable, and peripheral contortion of the platforms.

But it’s not. It’s simply using those platforms as designed.

Twitter’s mission statement speaks of sharing ideas and demolishing barriers: “To give everyone the power to create and share ideas and information instantly, without barriers.”

It’s no surprise, then, that ISIS was drawn to Twitter’s ability to share news about demolishing a different type of barrier. When the terrorist group startled the world in 2014 by sweeping through much of Syria and then pushing into Iraq, its key moment occurred on Twitter, as ISIS tweeted photographs of a bulldozer demolishing the earthen barrier that had long marked the border between Syria and Iraq.

Twitter later said that ISIS’s use “is not permitted on our service,” and that may be true as a matter of policy—but not as a matter of functionality. As ISIS used Twitter to break down barriers and share its own horrific ideas instantly and anonymously, ISIS wasn’t manipulating how Twitter works. It was using it precisely as designed: to share ideas rapidly and globally.

“Belong anywhere” is Airbnb’s motto. But it turns out there are some who don’t think that just anyone deserves to belong anywhere. A 2016 study revealed that would-be renters with white-sounding names booked successfully on Airbnb 50 percent of the time, compared to 42 percent for would-be renters with black-sounding names.

In response, Airbnb commissioned a report that concluded that “fighting discrimination is fundamental to the company’s mission.” But what’s actually fundamental to the company’s mission is fighting virtually any form of regulation. That’s what maximizes Airbnb’s profits; it’s also what gives the platform essentially a free pass from decades of legal and regulatory infrastructure carefully crafted to fight housing discrimination.

For racist landlords to have unfettered discretion to pick and choose renters based on any criteria whatsoever—even skin color as it appears in profile photos—isn’t an exploitation of Airbnb’s features. It’s just use of those particular features—which Airbnb has subsequently altered in some ways but generally has chosen to maintain.

And that brings us back to what Mueller’s charges reveal about how Russia used Facebook, among other platforms, to interfere with the 2016 election and sow discord among Americans. As Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, told the New York Times, “Facebook built incredibly effective tools which let Russia profile citizens here in the U.S. and figure out how to manipulate us. Facebook, essentially, gave them everything they needed.”

For example, the type of polarizing ads that Facebook admits Russia’s Internet Research Agency purchased get rewarded by Facebook’s undisclosed algorithm for provoking user engagement. And Facebook aggressively markets the microtargeting that Russia utilized to pit Americans against each other on divisive social and political issues. Russia didn’t abuse Facebook—it simply used Facebook.

Recognizing that these challenges—and others—emerging on modern communications platforms stem from their inherent features isn’t an indictment of the companies whose services we’ve all come to rely on; to the contrary, it shows just how hard these problems are. And it calls for a reorientation as to how the companies and the rest of us think about addressing these challenges.

First, companies could show the world how their algorithms operate—and, moreover, how malicious actors are using their platforms. That enhanced transparency could yield crowd-sourced solutions rather than leaving remedies to a tiny set of engineers, lawyers, and policy officials employed by the companies themselves.

Second, tech companies should at least experiment with bolder approaches to restricting malicious actors’ access to their services. So far, companies’ policies proscribe use by ISIS and certain other malicious actors, but in practice everyone can use the companies’ services unless and until another user complains about certain behavior and the company investigates and validates the complaint.