Welcome to Mossberg, a weekly commentary and reviews column on The Verge and Recode by veteran tech journalist Walt Mossberg, now an Executive Editor at The Verge and Editor at Large of Recode.

Totally false news isn’t a new thing in the United States. In our fourth presidential election, in 1800, two of our most brilliant founders — John Adams and Thomas Jefferson — faced off in a vicious campaign that involved newspaper editors on the take, and numerous false, often personal attacks. Some historians even claim that partisans for Adams spread the rumor that Jefferson was dead. (He won anyway.)

But they didn’t have Facebook to present, amplify, and repeat those falsehoods instantly to millions of people. And that’s why the fake news problem is so serious, even outside the context of a presidential election.

Back in May, the Pew Research Center found that roughly 44 percent of the US adult population got at least some of its news from Facebook. And that was before the general election. There’s nothing inherently wrong with this. Many if not most news organizations, old and new, big and small (including this one), post stories and videos on the social network. And readers and viewers are moved to share stories, whether publishers have embraced the platform or not.

But that puts a heavy responsibility on Facebook to make sure it’s not helping to spread outright lies masquerading as news or publishing the output of made-up news organizations. Yet that’s exactly what happened during the 2016 presidential campaign. In the best-known example, BuzzFeed discovered that over 100 mostly pro-Trump fake news sites in a single town in Macedonia were pumping out false “news” on Facebook in an effort to make money from ads.

Since then, Facebook CEO Mark Zuckerberg has posted two long statements on the social network. On November 12th, while he said, “We don't want any hoaxes on Facebook,” he also said it was “extremely unlikely hoaxes changed the outcome of this election.” But that was a weaselly excuse. Facebook has done controversial experiments to investigate whether the News Feed can affect emotions — surely fake news can affect beliefs as well.

A week later, in the second post, he got more detailed and outlined a series of steps the company was working on. These included better detection of fake news, a better reporting system for users to report fake news, and possibly flagging fake news with warning labels.

(Oddly, both posts briefly disappeared Tuesday. Shortly after The Verge reported that they were gone, they returned and the company said it was due to a system error.)

In both posts, Zuckerberg stressed the difficulty of deciding what was true or false, what was legitimate opinion or fact, and the need to balance dealing with fake news with protecting freedom of speech.

I agree that these considerations, and others, make this a delicate problem to solve. I especially agree that free speech and the right to opinions, on politics and everything else, must be protected — whether they are popular or not — as long as they aren’t hate speech.

But I am also convinced that Facebook has the financial, technical, and human resources to ferret out and totally block almost all fake news and hate speech, both of which it says it wants gone from its service. It’s a company that earned nearly $3 billion just last quarter, and which is reportedly building a tool capable of preventing controversial content from appearing in its News Feed in countries like China.

Yet the Zuckerberg posts suggest that, while the company is working to better detect fake news, it’s still hoping to rely on the all-too-common Silicon Valley belief that the wisdom of the crowd, plus third-party input, will save the day.

“We do not want to be arbiters of truth ourselves,” Zuckerberg says, “but instead rely on our community and trusted third parties.” Thus, among the ideas he lists for banishing fake news are those labels, that easier user reporting of fake news, and making fake news economically less enticing for its creators. (The company did bar known fake news sites from its ad networks, as did Google.)

But Facebook isn’t just a technology platform where news happens to be published, along with baby pictures, vacation bragging, and amateur sports commentary. It’s clearly a media company. It is now publishing articles and videos directly from a host of news organizations, including The Verge. Including this very column. These are encoded in a special way to work best on Facebook, and there are business terms behind the practice. Increasingly, people read news on Facebook and never even visit the originating site or publication.

Hell, even those Macedonian teens understood that Facebook was a media company. They made up fake media organization names from which to post. (Really, Facebook, you weren’t even a little suspicious about DonaldTrumpNews.co?)

So, yes, in my view, Facebook has a direct responsibility to get rid of fake news, and it cannot simply rely on its audience or others to shoulder the burden. I’m happy to see tools made available to readers that help report such trash, and happy that Facebook is working with third-party fact checkers. But the ultimate responsibility is Facebook’s.

Nobody wants Facebook to tinker with legitimate news and opinion — again, except for hate speech. But getting rid of purely fake news from purely fake sources is an eminently achievable task, especially for a well-funded, tech-savvy, huge media company serving nearly 2 billion people.

Here are a few guidelines, Facebook.

1. Assertions by actual people, even if they are false, aren’t fake news. People say and believe all kinds of things. So, even if they don’t believe in the moon landings, and form a Facebook group of like-minded others, that’s not fake news.

2. Opinions aren’t fake news. The existence of the new MacBook Pro is an indisputable fact, as are its specs, design, and price. Yet some might love it and others might hate it. The same goes for Donald Trump’s promise to build a border wall and for the Gilmore Girls revival. But neither the lovers nor the haters are creating fake news.

3. Differing interpretations aren’t fake news. Millions may have seen the video of a football play, from multiple angles. So there’s a real, actual fact there. I might think there was pass interference, and somebody else might not, but even if the replay shows there was a proper penalty, the other guy has a right to stick to his guns.

4. Sensational “news stories” with little or no reporting that seem opportune, and aren’t quickly replicated or even repeated with credit by reputable news sources, are probably fake news. You have the means to investigate this. It might be a new, legit, one-person blog that stumbled onto a great scoop, but it’s likelier to be a cash-driven Macedonian teenage fake news poster. How could you not have questioned one of the Macedonian “stories” that had the Pope endorsing a US presidential candidate? Did you think the Vatican Radio Facebook page would miss that?

5. Sketchy personal accounts that give every appearance of falsehood and spread fake news (see number 4) were probably established for that very purpose. You know who they are. Humans can often spot them, even if algorithms can’t. You can bar them.

I’m encouraged that the November 19th Zuckerberg post says the company wants to “detect what people will flag as false before they do it themselves.” But, again, I think Facebook needs to step up and take direct responsibility for expunging fake news, not just label it or give it less weight in the News Feed.

Facebook might even consider hiring a distinguished, nonpartisan editor and a small staff to help in the effort. The company abandoned such human input in its little-known Trending box after conservatives complained that right-leaning stories were being culled out. But if weeding out verifiably fake news — conservative or liberal or whatever — angers some users, that’s the price of being a news platform, even if it slightly affects growth. It’s the right trade-off.

All of this would mean Facebook would have to act like the media company it has become and stop pretending.

The time for pretending is over.