Google has once again been called out for algorithmically encouraging the spread of dubious, politically charged speculation and misinformation around a topical news event.

In the latest instance of the algorithmic amplification of misinformation, the news event in question is a shooting in a Texas church on Sunday. Authorities have identified 26-year-old Devin Patrick Kelley as the perpetrator.

Users of Google’s search engine who conduct internet searches for queries such as “who is Devin Patrick Kelley?” — or just do a simple search for his name — can be exposed to tweets claiming the shooter was a Muslim convert; or a member of Antifa; or a Democrat supporter…

Google's 'Popular On Twitter' news feature is a misinformation gutter. Search for Devin Patrick Kelley just now surfaced these four items. pic.twitter.com/06rcPOgx5b — Justin Hendrix (@justinhendrix) November 6, 2017

The core issue is that Google is prominently placing unverified claims high up in its hierarchy of relevance-ranked data — aka "Google search results" — which the company has itself previously likened to a library index of truthful data. (Albeit, it's demonstrably a pretty skewable index when your passing "oracle of truth" can be Julian Assange's Twitter feed…)

The section where this content is being embedded within Google’s search results is powered by its access to Twitter’s firehose of tweets, combined, it says, with its own ranking algorithms — which apparently also favor the kind of wild, clickbait-y and unverified claims that have been shown to spread like wildfire on Facebook (aka fake news).

The dynamic handful of tweets that Google's algorithms choose to showcase within search results is sometimes labeled "Popular on Twitter" (or else just "on Twitter").

These tweets do appear below a "Top Stories" section, which sits at the top of results. But the Twitter content is still very prominently displayed near the top of Google search results — meaning internet searchers looking for genuine information around a developing news story may well be unwittingly exposed to entirely unverified claims, including maliciously motivated, politically charged misinformation.

(On that wider topic, Google, Twitter and Facebook have all been giving evidence to Congress this month about how their platforms were — and still are being — manipulated as part of Russian political disinformation campaigns targeting U.S. voters.)

Asked about the Texas-related misinformation it’s algorithmically surfacing now, a spokesperson for Google provided us with the following statement: “The search results appearing from Twitter, which surface based on our ranking algorithms, are changing second by second and represent a dynamic conversation that is going on in near real-time.

“For the queries in question, they are not the first results we show on the page. Instead, they appear after news sources, including our Top Stories carousel which we have been constantly updating. We’ll continue to look at ways to improve how we rank tweets that appear in search.”

At the time of writing Twitter had not responded to a request for comment.

It’s not clear to what extent Twitter’s platform is feeding Google’s ranking algorithms at this point, i.e. via the dynamics at play on its own platform that can amplify certain tweets — such as bot networks working together to retweet particular politically charged content to try to get it trending. But it seems likely that the workings and dynamics of both platforms are at least partially involved in surfacing this content.

We confirmed that such tweets are being surfaced via Google searches by conducting some of our own searches on the topic. We were shown various additional/different tweets — including some suggesting the shooter was an atheist, or Antifa, as well as tweets from some established news outlets…


Safe to say, it’s a bundled mix of claims from all sorts of sources that requires the person exposed to the content to have the critical faculties to sift “potentially accurate” from “at best highly speculative” or even “out-and-out nonsense.”

A month ago we reported on a similar issue following another U.S. mass shooting, when Google was distributing unverified claims (from 4chan) directly within its Top Stories section — i.e. not just via the "on Twitter" segment of search results — which was arguably even worse.

Though it's still a pretty big ask to expect the average internet user to critically sort, in real time, a random selection of tweets about a topical news event that are actively being lifted high into their field of view — alongside other types of content which Google also implies answer the core search query.

In short, the algorithmic architecture that underpins so much of the content internet users are exposed to via tech giants' mega platforms continues to enable lies to run far faster than truth online, by favoring flaming nonsense (and/or flagrant calumny) over more robustly sourced information.

Meanwhile, the content internet users are being exposed to has become a very blurry blend, as increasingly dominant tech platforms algorithmically mix information that might correctly answer a query or need with viral, provocative claims intended to incentivize clicks and engagement — and thus generate more revenue for the underlying tech platform.

Such automated and mixed motivations are writ very large indeed in our modern digital “age of misinformation.”

Update: Google’s public liaison for search, Danny Sullivan, who only recently joined the company, has taken to Twitter (of all places) to add a little more commentary on this topic — tweeting that “We do need to fix it. We plan to”, while also defending the feature as useful…

It does work in many cases and useful, such as here. Yanking it potentially makes search worse for other queries. So better if can improve. pic.twitter.com/QOe9vB4y7t — Danny Sullivan (@dannysullivan) November 7, 2017

"Bottom line: we want to show authoritative information. Much internal talk yesterday on how to improve tweets in search; more will happen," he further tweeted. "In particular, we took deserved criticism after our Top Stories section carried misleading information after the Las Vegas shooting… Top Stories should have correct info. So with the Texas shooting, we especially watched to see if Top Stories got it right.

“Answer: early changes put in place after Las Vegas shootings seemed to help with Texas. Incorrect rumors about some suspects didn’t get in. The tweets we carry in results should reflect useful information. We’re not happy with ourselves they didn’t, even if for a short time.”

“Right now, we haven’t made any immediate decisions. We’ll be taking some time to test changes and have more discussions, but not just talk,” he adds. “Google made changes to Top Stories and is still improving those. We’ll do same with tweets. We want to get this right.”

See https://t.co/wxMNofhgUf and https://t.co/lxW40kiVvB — issue isn't so much spotting stuff as much as fixing algos to have scale solution — Danny Sullivan (@dannysullivan) November 7, 2017

Sullivan also reveals that the problem for Google was wider than just which tweets were being displayed in search results — though he did not confirm exactly what other issues he meant, saying only: "We also had issues beyond tweets in search. I hope to share more about what we're doing there. We want to get those right, as well."

He has also packaged his commentary into a Twitter Moment — which you can see here.