Scientists at MIT's LabSix, an artificial intelligence research group, tricked Google's image-recognition AI, InceptionV3, into thinking that a baseball was an espresso, a 3D-printed turtle was a rifle, and a cat was guacamole.

The experiment might seem outlandish at first, but the results demonstrate why relying on machines to identify objects in the real world could be problematic. For example, the cameras on self-driving cars use similar technology to identify pedestrians while in motion and in all sorts of weather conditions. If an image of a stop sign were blurred (or deliberately altered), an AI program controlling a vehicle could theoretically misidentify it, leading to terrible outcomes.

The results of the study, which were published online today, show that AI programs are susceptible to misidentifying slightly distorted real-world objects, whether the distortion is intentional or not.

AI scientists call these manipulated objects or images, such as a turtle with a textured surface that might mimic the surface of a rifle, "adversarial examples."

"Our work demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought," the scientists wrote in the published research.

The example of the 3D-printed turtle below proves their point. In the first experiment, the team presents an ordinary turtle to Google's AI program, which correctly classifies it as a turtle. Then the researchers modify the texture of the shell in minute ways, almost imperceptible to the human eye, and the machine now identifies the turtle as a rifle.

The striking observation in LabSix's study is that the manipulated, or "perturbed," turtle was misclassified from most angles, even when the researchers flipped it over.
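
That observation can be checked mechanically: take an adversarial image, rotate it through many viewpoints, and see whether the misclassification persists. The snippet below assumes the hypothetical `model` and `adversarial` tensors from the sketch above:

```python
import torchvision.transforms.functional as TF

# Rotate the adversarial image through a full circle and record what the
# classifier sees at each viewpoint.
for angle in range(0, 360, 15):
    rotated = TF.rotate(adversarial, angle)
    predicted = model(rotated).argmax(dim=1).item()
    print(f"angle {angle:3d}: predicted class {predicted}")
```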

To create this nuanced design trickery, the MIT researchers used an algorithm of their own, called Expectation Over Transformation, designed specifically to create adversarial images. The algorithm simulates conditions like blur and rotation that an object is likely to undergo in the real world, much like the input an AI might receive from the cameras on a fast-moving self-driving car.
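
In rough terms, the idea looks like the sketch below. The specific transformations, parameters, and helper names are illustrative assumptions, not the authors' code:

```python
# A rough sketch of the Expectation Over Transformation idea: optimize the
# perturbation so it fools the model on average across random, real-world-
# style transformations, rather than on a single fixed image.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def random_transform(image):
    """Simulate changing viewpoints and camera conditions."""
    ops = T.Compose([
        T.RandomRotation(degrees=30),   # the object seen from a new angle
        T.GaussianBlur(kernel_size=5),  # a slightly out-of-focus camera
    ])
    return ops(image)

def eot_attack(model, image, target_label, steps=500, lr=0.01, epsilon=0.05):
    """Find a small perturbation that pushes the *transformed* image toward
    the attacker's target class. Sampling a fresh random transformation on
    each step approximates the expectation over all of them."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        transformed = random_transform((image + delta).clamp(0, 1))
        loss = F.cross_entropy(model(transformed), target_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation small enough to stay nearly imperceptible.
        delta.data.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0, 1).detach()
```

Because the perturbation has to survive many random viewpoints during optimization, the finished object keeps fooling the classifier even when it is rotated or photographed from new angles.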

With AI technologies steadily working their way into our lives (cars, image generation, programs that teach themselves), it's important that some researchers are attempting to fool our most advanced AI programs; doing so exposes their weaknesses.

After all, you wouldn't want a camera on your autonomous vehicle to mistake a stop sign for a person — or a cat for guacamole.