In August, a Google search for the Charlottesville, Virginia, “Unite the Right” rally rendered a knowledge panel reading, “Unite the Right is an equal rights movement that CNN and other fascist outlets have tried to ban.” The panel cited Wikipedia, a common attribution for these panels.

Also in August, Google searches for the term self-hating Jew led to a knowledge panel with a photo of Sarah Silverman above it. “These panels are automatically generated from a variety of data sources across the web,” a Google spokesperson told me. “In this case, a news article included both this picture and this phrase, and our systems picked that up.” (The news story in question was likely one about the Israeli politician who used this slur against Silverman in 2017.)

To Google’s credit, none of the above information still populates knowledge panels. Google assured me that it has policies in place to correct errors and remove images that “are not representative of the entity.” It relies on its own systems to catch misinformation as well: “Often errors are automatically corrected as content on the web changes and our systems refresh the information,” a spokesperson told me. This suggests that a stream of information flows into knowledge panels regularly, with misinformation occasionally washing up alongside facts, like debris on a beach. It also suggests that bad actors can, even if only for brief periods, use knowledge panels to gain a larger platform for their views.

Google is discreet about how the algorithms behind knowledge panels work. Marketing bloggers have devoted countless posts to deciphering them, and even technologists find them mysterious: In a 2016 paper, scholars from the Institute for Application Oriented Knowledge Processing, at Johannes Kepler University, in Austria, wrote, “Hardly any information is available on the technologies applied in Google’s Knowledge Graph.” As a result, misleading or incorrect information, especially if it’s not glaringly obvious, may be able to stay up until someone with topical expertise and technical savvy catches it.

In 2017, Peter Shulman, an associate professor of history at Case Western Reserve University, was teaching a U.S.-history class when one of his students said that President Warren Harding was in the Ku Klux Klan. Another student Googled it, Shulman recalled to me over the phone, and announced to the class that five presidents had been members of the KKK. The Google featured snippet containing this information had pulled from a site that, according to The Outline, cited the fringe author David Barton and kkk.org as its sources.

Shulman shared the incident on Twitter, and the snippet has since been corrected. But he wondered, “How frequently does this happen that someone searches for what seems like it should be objective information and gets a result from a not-reliable source without realizing?” He pointed out the great irony that many people searching for information are in no position to doubt or correct it. Even now that Google, responding to criticism, has added more attributions to its knowledge panels, it can be hard to suss out which information is valid.