Facebook’s “safety check” began appearing for users in Bangkok around 11am on Dec. 27. “The Explosion in Bangkok, Thailand,” read the alert, inviting people who may have been in the area to share a “safety status” with their network if they were safe or unaffected, much as they would any other update. Alongside the status window, and below a map highlighting Thailand’s capital, Facebook offered a few more details:

What Happened: Explosion

When: Dec. 27, 2016

Affected Cities: Bangkok, Thailand

Confirmation Source: Media Sources

Facebook also provided a link to “See more information.” Users who clicked it were taken to a handful of news articles, including one from a website called Bangkok Informer, which had recently published a story about a deadly bombing that occurred at Bangkok’s Erawan Shrine.

In fact, no bombs had gone off at Erawan Shrine on Dec. 27. The Bangkok Informer article, which is no longer available online, described an explosion that took place there in August 2015. Facebook later said safety check kicked in for a completely unrelated reason. The feature, which is designed to help users communicate better during all sorts of crises, was triggered by reports, later confirmed by multiple local news outlets, of a man lobbing explosives into the Government House compound from a nearby rooftop. By 10pm local time, Facebook’s notification was gone.

The entire episode played out in less than 12 hours, a short time in real life but an eternity on the internet when an apparent crisis is in motion. Halfway around the world, another story was gaining momentum on US tech blogs, based on a handful of accounts from Facebook users who, after following Facebook’s link for more information, were shown mostly outdated stories on the Erawan Shrine bombing rather than reports of that day’s incident at the Government House. Facebook, the headlines read, had been duped by “fake news” again.

Facebook has been widely criticized for its role in spreading propaganda, hyper-partisan content, and demonstrably false stories ahead of the US presidential election. While Facebook CEO Mark Zuckerberg initially shrugged off the idea that such “fake news” could have influenced the election, he later vowed to “take misinformation seriously.” Adam Mosseri, Facebook’s VP of news feed, recently outlined steps the company is taking to eliminate the most egregious hoaxes from its site.

With that backdrop, it was easy to believe the company had erred again in deploying safety check in Bangkok on Dec. 27. The truth of the situation—that one misleading story had become tangled up with still-developing reports of separate explosions in the city—is more complicated. But for a company with 1.8 billion monthly active users and the power to alert them to crises often before even local media outlets have caught on, there are bound to be ancillary and sometimes negative consequences.

Facebook introduced safety check in October 2014 in a clear attempt at corporate do-goodery. “Over the last few years there have been many disasters and crises where people have turned to the Internet for help,” Zuckerberg wrote at the time. “Connecting with people is always valuable, but these are the moments when it matters most.”

Originally, Facebook employed a team of people who decided when safety check should be turned on. Then, in late 2015, the company backed away from that strategy after being accused of western bias for activating safety check in response to the Paris attacks but not after a similar bombing in Beirut. This November, Facebook announced it had handed over more control of safety check and “community help,” another crisis response feature, to its users.

The updated safety check, Wired reported that month, “begins with an algorithm that monitors an emergency newswire—a third-party program that aggregates information directly from police departments, weather services, and the like.” The program can detect events long before they are reported in the media. Facebook then combs its platform to see whether people in the area are discussing the possible incident. If enough are, Facebook prompts them to check in as safe.
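The two-stage process Wired describes, a newswire signal followed by a check for local chatter, can be sketched roughly as follows. This is an illustrative reconstruction only: the function name, data shapes, and the `min_mentions` threshold are all assumptions, not Facebook’s actual implementation.

```python
# Hypothetical sketch of a two-stage safety check trigger: a third-party
# newswire flags a possible incident, and activation happens only if enough
# local posts appear to be discussing it. All names and thresholds are
# illustrative assumptions.

def should_activate_safety_check(newswire_events, local_posts, min_mentions=50):
    """Return the names of newswire events echoed by enough local posts."""
    activations = []
    for event in newswire_events:
        # Stage 1: an emergency newswire (aggregating police departments,
        # weather services, etc.) reports a possible incident in some city.
        keywords = event["keywords"]  # e.g. {"explosion"}
        city = event["city"]

        # Stage 2: count posts from users in that city whose text mentions
        # any of the event's keywords.
        mentions = sum(
            1
            for post in local_posts
            if post["city"] == city
            and keywords & set(post["text"].lower().split())
        )
        if mentions >= min_mentions:
            activations.append(event["name"])
    return activations
```

Under this sketch, a newswire report that no one in the affected city is talking about never reaches users, which is the “false alarms will fizzle out” assumption discussed below.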

This hands-off approach is based on a critical assumption: News of real emergencies will spread organically, and false alarms will fizzle out. When that assumption holds true, the results can be arresting. On the night of the mass shooting in Orlando, Florida, this summer, Facebook’s algorithms activated safety check 11 minutes before police officially announced the attack. When it fails—as happened in Bangkok—or something else goes awry, safety check can provoke confusion, anxiety, and alarm much like any other breaking news alert.

Zuckerberg has pushed back on the idea that Facebook is a media company that should adhere to editorial standards, preferring to see it as a neutral space for “public discourse.” Safety check, a feature that can function simultaneously as a public good and a breaking news service, is one area where the line between media company and technology platform is particularly thin. Facebook’s power is unprecedented in its sheer size and influence over what information people see. All it takes is one false story to start a bomb scare.