For some reason, we often think of computers as infallible — objective, logical, rational, and nearly always right. There is something about a computer’s lack of emotion and intelligence that makes it strangely trustworthy — while, on the other hand, despite their massive intelligence, we all know that humans are deeply flawed and prone to all sorts of biases. As it turns out, computers are deeply flawed as well. Take optical illusions, for example — you might think that optical illusions are the result of the inadequate aqueous orbs we call eyes, but a new study shows that even the most advanced computer vision systems can be easily tricked by an optical illusion. Obviously, this doesn’t bode well for a future in which the world is populated by robot soldiers, police, and workers.

If we program a computer to do something, we expect it to perform that action correctly time and time again — a fairly rational assumption, when cold, hard, objective code is running the show. Yes, software and hardware sometimes have bugs, but generally, if a program or robot or some other application of technology is designed to do something, there’s an (irrational?) belief that it will be fit for the task. Obviously, though, that’s a silly belief: Technology is only ever as infallible as the humans who created it.

Case in point: Optical illusions. Over the last couple of years, we’ve reported on a few computer vision systems that are becoming exceedingly good at identifying objects — really, they’re almost as good as humans now. You would think that these systems, because they’re computers, wouldn’t be vulnerable to optical trickery — to a computer, an apple is an apple, irrespective of checkered zones of contrasting colors… right? Sadly not. It turns out that computers are just as prone to optical illusions as our own brains and eyes, if not more so.

The study, carried out by researchers at the University of Wyoming, took one of the best deep neural networks (DNNs) — AlexNet — and proceeded to trick it into incorrectly identifying all sorts of weird patterns. To create the images, the researchers used two different genetic algorithms — one that ultimately produced noise, and another that produced some rather interesting patterns. In both cases, the genetic algorithms started with a real image (a penguin, say) and then slowly evolved it, each time checking that the neural network still recognized it. [Research paper: arXiv:1412.1897]

With each generation, the algorithm added a small random mutation, slowly deforming the image until human eyes could no longer make anything out. In the case of the white-noise images, you can kind of see an object in the middle (but you’d still be hard-pressed to identify the objects without the labels underneath). In the latter, more abstract patterns, I have no idea where the baseball or peacock has gone. I can kind of see how a computer might see a remote control, or an electric guitar… but… yeah. The image at the top of the story shows some more images generated by the evolutionary algorithms, just to show how varied they are (the neural network still incorrectly identified these with 99% certainty).
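If you want a feel for how the evolutionary process works, here’s a minimal Python sketch of the loop described above. To be clear, this is a deliberately simplified, mutation-only version, not the researchers’ actual code (the paper uses full genetic algorithms), and the `classify` function is a hypothetical stand-in for a forward pass through a trained network like AlexNet:

```python
import numpy as np

def classify(image, target_label):
    """Hypothetical stand-in: run the image through a trained DNN
    (AlexNet in the study) and return its confidence, 0.0 to 1.0,
    that the image shows target_label."""
    raise NotImplementedError("hook this up to a real model")

def evolve_fooling_image(image, target_label,
                         generations=10000,
                         mutation_scale=0.02,
                         confidence_floor=0.99):
    """Mutate the image a little each generation, keeping a mutation
    only if the network still recognizes the target with at least
    confidence_floor certainty. Over many generations, the image
    drifts far from anything a human would recognize, while the DNN
    stays ~99% sure it's looking at the original object."""
    current = np.clip(image.astype(np.float64), 0.0, 1.0)
    for _ in range(generations):
        # Add a small random perturbation to the pixels
        candidate = current + np.random.normal(0.0, mutation_scale, current.shape)
        candidate = np.clip(candidate, 0.0, 1.0)
        # Keep the mutation only if the network is still confident
        if classify(candidate, target_label) >= confidence_floor:
            current = candidate
    return current
```

The interesting part is the acceptance test: the only selection pressure is the network’s own confidence, so nothing stops the image from drifting into territory that means nothing to human eyes.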

The point of the study was to show that we really shouldn’t rely blindly on these modern — and seemingly very accurate — computer vision systems. Humans and computers see things in very different ways, and it’s clearly very easy to fool a computer into seeing something that isn’t actually there. This is obviously a big blow to computer vision systems, which are just starting to hit the mainstream — Facebook, the FBI, and numerous other interested parties will be deeply upset to learn that their facial recognition algorithms might be similarly easy to trick. On a more general level, though, this is bad news for deep neural networks, which are currently one of the leading approaches to creating “brain-like” general artificial intelligence.

Hopefully a future study will find a way of protecting DNNs against such flaws — but given the “black box” nature of DNNs, I have a feeling that might be rather hard. Obviously, we’ll have to find a solution before fielding robot soldiers or workers whose artificial visual cortices can be so easily bamboozled. Incidentally, it’s highly likely that the computer vision systems used by autonomous cars are vulnerable to the same kind of attack…
