Open up Instagram these days and you might be bombarded with calls to "Stay Home."

On YouTube, you may see a link to a government website about the coronavirus.

Or go to Twitter and search for the phrase "social distancing is not effective." It might be there, but probably not for long — Twitter has banned the phrase as harmful.

A few years ago, these kinds of warnings and filters would have been hard to imagine. Most major consumer technology platforms embraced the idea that they were neutral players, leaving the flow of information up to users.

Now, facing the prospect that hoaxes or misinformation could worsen a global pandemic, tech platforms are taking control of the information ecosystem like never before. It's a shift that may finally put to rest the idea that Big Tech provides a "neutral platform" where the most-liked idea wins, even if it's a conspiracy theory.

"What you're seeing is the platforms' being forced into a public health stand more than they've ever been before," said Ethan Zuckerman, director of the Center for Civic Media at the Massachusetts Institute of Technology.

"It seems like the platforms have decided to take a clear stand, where they see COVID-19 as a significant enough public health problem that they're comfortable putting their thumb on the scale even if it runs the risk of some of their users claiming it's an unfair restriction on free speech," Zuckerman said.

From the start, major consumer internet companies had some rules. Most platforms didn't allow pornography or gore. As militants in the Middle East moved online, many companies — most notably Google and Facebook — worked to identify and remove content that tried to spread propaganda or recruit people. Still, those efforts weren't fully successful. YouTube, for example, mistakenly removed video documentation of human rights abuses as part of its campaign to delete terrorist-related content.

Beyond those narrow exceptions, tech companies resisted calls to influence what their users saw.

"We are a tech company, not a media company," Facebook CEO Mark Zuckerberg said in 2016, insisting he wanted to build tools, not moderate content. In 2018, he said Facebook shouldn't remove posts that deny the existence of the Holocaust so users have room to make unintentional mistakes.

That began to change in recent years, as academics, politicians, civil rights groups and even former employees scrutinized companies and pushed for change amid reports of lackluster content moderation. While tech companies hadn't been making specific editorial decisions, the systems that determined what people saw — often based on complex algorithms that tried to maximize engagement — became the focus of intense criticism.

Tech companies reacted. Major platforms stepped up enforcement around hate speech and abuse. Many changed how their systems worked. YouTube pledged last year to stop recommending conspiracy videos, while Twitter added a feature for users to follow certain topics selected by the company. Amazon removed more than a dozen books that falsely claimed chlorine dioxide, a homemade bleach, could cure conditions from malaria to childhood autism.

Facebook now has an elaborate rulebook on what stays up and what comes down, the result of countless internal meetings and feedback from lawmakers, interest groups and users. It has also been working on creating a body, independent at least in theory, that would rule on content removal questions almost like a supreme court.

The companies' new willingness to moderate their platforms has culminated in an industrywide effort to crack down on misinformation and push people toward authoritative information at a particularly crucial time.

"Neutrality — there's no such thing as that, because taking a neutral stance on an issue of public health consequence isn't neutral," said Whitney Phillips, a professor of communication at Syracuse University who researches online harassment and disinformation.

"Choosing to be neutral is a position," she said. "It's to say, ‘I am not getting involved because I do not believe it is worth getting involved.' It is internally inconsistent. It is illogical. It doesn't work as an idea.

"So these tech platforms can claim neutrality all they want, but they have never been neutral from the very outset," she added.