“I’m not a tinfoil hat kind of guy for the most part,” Krebs told me, “but it was very clear that the tinfoil hat people were going to have a field day with this.” And in some ways it's the perfect conspiracy theory, because you can't prove what's going on either way. Without Google’s help—which they haven’t yet offered—there’s no way to know why the translate algorithm connected “lorem lorem” to “China’s Internet.”

And translating “lorem” to “China” does seem like something more than just garbage in, garbage out. It may not be the dark Internet, but it also doesn’t seem to be entirely random. One explanation could have to do with the text the algorithm uses to generate its translations. Google Translate works by drawing from vast banks of text, searching for patterns in language use to match future translation requests. Some of those texts include documents from the United Nations and the European Union that have to be translated into multiple languages. It’s possible that if either entity uses lorem ipsum placeholder text in a document, Google might think it’s looking at the “Latin translation” of that text.
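To see how that could happen, here is a toy sketch of the idea, not Google’s actual system: a statistical translator that learns word pairings by counting which words co-occur across a parallel corpus. The corpus below is invented for illustration; if placeholder text ends up sitting opposite real prose, the counter learns a spurious pairing.

```python
# Toy illustration of corpus-based translation (NOT Google's real system):
# count how often each "Latin" word co-occurs with each English word
# across paired documents, then translate by picking the top count.
from collections import Counter

# Hypothetical parallel corpus: placeholder text accidentally paired
# with real English prose, as might happen in a multilingual document.
parallel = [
    ("lorem ipsum", "china internet"),
    ("lorem ipsum dolor", "china network great"),
    ("lorem sit", "china firewall"),
]

counts = Counter()
for latin_doc, english_doc in parallel:
    for lw in latin_doc.split():
        for ew in english_doc.split():
            counts[(lw, ew)] += 1

def best_translation(word):
    # Choose the English word that co-occurs most often with `word`.
    candidates = {ew: c for (lw, ew), c in counts.items() if lw == word}
    return max(candidates, key=candidates.get)

print(best_translation("lorem"))  # "china" wins on raw co-occurrence
```

The point is only that a pattern-matcher with no notion of meaning will happily learn whatever pairings its training text contains, placeholder or not.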

Another potential culprit could be programmers involved with the DefCon Badge project—teams who spend hours hacking projects and puzzles. "If anybody was going to go through the trouble of trying to game the results it would be those guys," Krebs says.

While it's possible that something like this could happen randomly, given the law of large numbers and just how much text Google Translate is dealing with, not everybody is convinced this is accidental. "Things like this in isolation are very unlikely," says Pedro Domingos, a machine-learning researcher at the University of Washington. And he points out that tricking Google into learning a cipher of your own like this wouldn't be impossible—it simply involves putting up a wall of dummy text and its translation for Google to trawl and learn from. "My guess would be that there is something non-accidental here. Exactly what it was we may not ever find out."
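Domingos's scenario can be sketched in the same toy co-occurrence terms. This is an illustration with invented data, not a description of any real attack: an attacker publishes "dummy text" pages alongside chosen translations, and once the translator's training crawl picks them up, its learned mapping shifts.

```python
# Hedged sketch of the poisoning idea: planting parallel dummy pages
# steers what a co-occurrence-based translator learns. Invented data.
from collections import Counter

def learn(pairs):
    """Build a translate function from a list of (source, target) docs."""
    counts = Counter()
    for src, tgt in pairs:
        for s in src.split():
            for t in tgt.split():
                counts[(s, t)] += 1
    def translate(word):
        cands = {t: c for (s, t), c in counts.items() if s == word}
        # Fall back to the word itself if nothing was ever learned.
        return max(cands, key=cands.get) if cands else word
    return translate

clean = [("dolor sit amet", "pain sits here")]      # innocuous corpus
planted = [("ipsum ipsum", "internet free"),        # attacker's pages
           ("ipsum dolor", "internet pain")]

before = learn(clean)
after = learn(clean + planted)
print(before("ipsum"), after("ipsum"))  # untranslated, then "internet"
```

Before the planted pages, "ipsum" has no learned pairing; afterward, the attacker's chosen word dominates the counts—which is exactly the kind of non-accidental fingerprint Domingos is describing.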

The real answer is probably that Google Translate simply isn’t perfect. Krebs is relatively convinced that this is just a blip in machine learning—that the algorithm simply doesn’t have enough new Latin documents to pull from to help it make sense of Latin text. So when we feed it nonsense text it does the best it can to make meaning from it—to find the connections it thinks we’re seeking from the bank of information it has. “It doesn’t have enough to go on, and in an attempt to impress its creators, it's trying to figure it out on its own,” he says.

Humans are good at this kind of pattern-finding and meaning-making out of nonsensical data, too. "Lorem ipsum" is used precisely because it is meaningless, but we assume the information we get back from Google must be meaningful, so we try to map meaning back onto the results we’ve got. Which is how we end up here—wondering whether a failure in Google Translate is actually a secret Chinese code.

Then again, maybe it is. Krebs reminds me of the famous Joseph Heller quote: “Just because you're paranoid doesn't mean they aren't after you.”
