Mark Zuckerberg’s apology tour for Facebook’s year of misdeeds continued this weekend, when the Facebook C.E.O. and founder took to his own platform to atone. “Tonight concludes Yom Kippur, the holiest day of the year for Jews, when we reflect on the past year and ask forgiveness for our mistakes,” he wrote. “For those I hurt this year, I ask forgiveness and I will try to be better. For the ways my work was used to divide people rather than bring us together, I ask forgiveness and I will work to do better.” It was a markedly different tone from the one Zuckerberg had struck as recently as 11 months ago, when he dismissed as a “pretty crazy idea” allegations that misinformation on Facebook had been used to sway the 2016 presidential election, and brushed off Barack Obama’s privately voiced concerns. And it echoed his statements in a rare Facebook Live video two weeks prior, in which he addressed the purchase of 3,000 Facebook ads by pro-Kremlin forces. (The company also recently announced that it plans to hire 1,000 more employees to augment its global-ads review team.)

Facebook was still dealing with the fallout from its Russia P.R. crisis when it was forced to face another challenge. In the immediate aftermath of the shooting late Sunday night at a music festival in Las Vegas, which left more than 50 dead and hundreds injured, fake news and conspiracy theories circulated quickly on the site. On its “safety check” page, which lets users in the vicinity of a natural disaster or safety incident check in to let their friends and family know they’re O.K., Facebook prominently displayed four news stories, including one from a self-proclaimed far-right Web site, which baselessly suggested that the shooting was “more like the kind of target a Left-wing nutjob would choose than a Right-wing nutjob.” Thanks to Facebook’s algorithm, a report from Gateway Pundit made its way onto the site’s crisis-response page ahead of legitimate news sources like NBC. (Interestingly, Facebook’s trending-news bar, which came under fire last year for allegedly suppressing conservative views, showed news from more legitimate sources.)

In a statement, Facebook framed the issue as a momentary mishap. “Our Global Security Operations Center spotted these posts this morning and we have removed them,” the statement read. “However, their removal was delayed, allowing them to be screen captured and circulated online. We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused.”

The controversy comes amid growing criticism that the company’s platform is being manipulated by foreign actors and trolls, both to spread fake news and to exploit political divisions among Facebook users. Yet even as it becomes clear that those accusations hold water, Facebook is resisting being labeled a media company, preferring to leave news judgment to algorithms rather than to people, who could introduce their own biases.

Each new development in Facebook’s evolving battle with fake news points to a sort of identity crisis: is it a tech company that has been hugely influential in political campaigns? Or is it a platform not intended to influence those campaigns one way or another? Whatever it decides the answer is, the company ultimately bears a responsibility to its users to display factually accurate breaking-news stories in times of crisis. For all its growth as a tech giant, Facebook—along with Google and Twitter, both of which struggled to tamp down on fake shooting-related news Monday—is still grappling with whether it has grown too large for its checks-and-balances systems to keep pace.