When an explosion rocked the Port Authority Bus Terminal subway station early this morning, algorithms started to detect chatter across the web.

On Google, the output of those algorithms appeared as “Top stories” — visually promoted excerpts the search giant displays for newsy topics — which, for a brief period, included links from the disreputable British tabloid The Sun and the often inaccurate conservative outlet The Daily Caller atop a search for “NYC attack.”

Just under those results were tweets that came even further from the fringe, including one from an Infowars editor who claimed the attack was “ISIS-inspired.” (While this may eventually prove true, at the time of the tweet and at press time, the NYPD had not yet announced whether the attacker had been linked to ISIS. That unconfirmed assertion may have originated with former NYPD police commissioner Bill Bratton, who was not involved in the investigation, speaking on MSNBC.)

This isn’t the first time Google’s algorithm has spread iffy information in the wake of a violent attack. After the Las Vegas Strip shooting, it returned “Top stories” from 4chan that blamed the wrong person for the slaughter. At the time, Google said in its defense that the results had been generated algorithmically, and that it would make improvements to prevent similar results from appearing in the future.

“None of this has been confirmed, but here it is on Google's top results.” pic.twitter.com/cveBM5QuqC — alfred 🆖 (@alfredwkng), December 11, 2017

For most of Google’s existence, it has served as a reference tool directing people to websites where they might find the information they seek. But in the last five years, Google has begun highlighting information in various ways in order to provide direct answers to users’ questions — something people increasingly want for searches they do on their phones or through voice assistants. This trend has resulted in repeated embarrassment for Google, as its apparently authoritative answers have at times affirmed that the Earth is flat, women are evil, and four U.S. presidents had been members of the Ku Klux Klan (none of which is true). It also once answered the query “is obama planning a coup” with information from a conspiracy site claiming that Obama was planning to seize power after his term came to an end.

It’s easy to laugh at those outrageous claims, but in the hazy minutes and hours after a tragedy, many people turn to information portals like Google in the hope that they’ll find an objective selection of the most recent updates. In reality, the results are populated by poorly understood algorithms that often seem to perform worst when news is breaking.

And then, without a trace, the top results will change as though the search giant had never made the claim.

“At the moment, there’s no record at all,” said Robert Epstein, a researcher who studies how search results can affect political outcomes, of the transient nature of search results. “Most of what people see on their screen is ephemeral. That is potentially extremely dangerous.”

In this case, though, at least one Google representative appears to have taken notice. Danny Sullivan, a former journalist who watched Google closely at his site Search Engine Land until he was hired in October as the company’s public liaison for search, responded to a tweet pointing to the shaky info by promising that the company would investigate.

“Passing along to our teams to look,” Sullivan said.

Update: Google sent a statement. “The Top Stories section in our search results worked as intended — responding to users’ queries with a constantly updated view of breaking stories citing credible sources. On the Twitter carousel that appeared in search results, we had an issue with a tweet that looked to fuel speculation. We take this seriously and are working on making the feature represent more authoritative sources.”