Tech giants love to tout how good their computers are at identifying what’s depicted in a photograph. In 2015, deep learning algorithms designed by Google, Microsoft, and China’s Baidu surpassed humans at the task, at least by some measures. This week, Facebook announced that its facial-recognition technology is now smart enough to identify a photo of you, even if you’re not tagged in it.

But algorithms, unlike humans, are susceptible to a specific type of problem called an “adversarial example.” These are specially designed optical illusions that fool computers into doing things like mistaking a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms.

While a panda-gibbon mix-up may seem low stakes, an adversarial example could thwart the AI system that controls a self-driving car, for instance, causing it to mistake a stop sign for a speed-limit sign. Adversarial examples have already been used to beat other kinds of algorithms, like spam filters.

Those adversarial examples are also much easier to create than was previously understood, according to research released Wednesday from MIT’s Computer Science and Artificial Intelligence Laboratory. And not just under controlled conditions; the team reliably fooled Google’s Cloud Vision API, a machine learning algorithm used in the real world today.

Previous adversarial examples have largely been designed in “white box” settings, where computer scientists have access to the underlying mechanics that power an algorithm. In these scenarios, researchers learn how the computer system was trained, information that helps them figure out how to trick it. These kinds of adversarial examples are considered less threatening, because the conditions they rely on don’t closely resemble the real world, where an attacker wouldn’t have access to a proprietary algorithm.
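To see why white-box access makes the attack easy, consider a sketch of the classic gradient-based approach (known as the fast gradient sign method). Everything here is invented for illustration: a toy linear classifier with weights the attacker can read directly, standing in for a real image model.

```python
import numpy as np

def predict(w, b, x):
    """Probability that input x belongs to class 1 (logistic model)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# In a white-box setting, the attacker knows the model's parameters.
w = np.array([2.0, -1.5, 0.5])   # model weights (visible to the attacker)
b = 0.1
x = np.array([1.0, 0.2, 0.3])    # an input the model confidently labels class 1

# For a linear model, the gradient of the class-1 score with respect to
# the input is just w, so stepping each feature against sign(w) pushes
# the input toward the other class while changing it only slightly.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(predict(w, b, x))      # original input: probability well above 0.5
print(predict(w, b, x_adv))  # perturbed input: probability drops below 0.5
```

The same idea scales up to deep networks: with access to the model’s gradients, a small, targeted nudge to each pixel can flip the predicted label.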

For example, in November another team at MIT (with many of the same researchers) published a study demonstrating how Google’s InceptionV3 image classifier could be duped into thinking that a 3-D-printed turtle was a rifle. In fact, researchers could manipulate the AI into thinking the turtle was any object they wanted. While the study demonstrated that adversarial examples can be 3-D objects, it was conducted under white-box conditions. The researchers had access to how the image classifier worked.

But in this latest study, the MIT researchers did their work under “black box” conditions, without that level of insight into the target algorithm. They designed a way to quickly generate black-box adversarial examples that are capable of fooling different algorithms, including Google’s Cloud Vision API. In Google’s case, the MIT researchers targeted the part of the system that assigns names to objects, like labeling a photo of a kitten “cat.”
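The general idea behind such query-based black-box attacks can be sketched as follows. The attacker never sees the model’s internals; they only submit inputs and read back scores, and use those queries to estimate the gradient. This toy model and the simple finite-difference estimator are illustrative assumptions — the MIT team used a more query-efficient estimator against the real API.

```python
import numpy as np

rng = np.random.default_rng(0)
w_secret = rng.normal(size=10)   # hidden parameters the attacker never sees

def query(x):
    """The attacker's only access: submit x, get back a class-1 score."""
    return 1.0 / (1.0 + np.exp(-(w_secret @ x)))

def estimate_gradient(x, delta=1e-4):
    """Approximate the score's gradient using only paired queries."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = delta
        grad[i] = (query(x + step) - query(x - step)) / (2 * delta)
    return grad

x = rng.normal(size=10)
if query(x) < 0.5:               # make sure the demo starts in class 1
    x = -x

# Repeatedly step against the estimated gradient to drive the score down.
x_adv = x.copy()
for _ in range(50):
    x_adv -= 0.1 * np.sign(estimate_gradient(x_adv))

print(query(x), query(x_adv))    # the second score ends up far lower
```

The expensive part is the number of queries: estimating gradients this naively costs two queries per input dimension per step, which is why making black-box attacks fast and query-efficient, as the MIT work did, matters.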