Even people who aren’t sci-fi fans or avid A.I. enthusiasts know that in 1997, IBM’s “Deep Blue” chess computer defeated then-world chess champion Garry Kasparov. Relatively few of us, however, are aware of the fascinating developments in other narrow domains of skill where artificial intelligence is gaining ground at incredible speed.

1. Rock, Paper, Scissors

Researchers at the Ishikawa Watanabe Laboratory trained a robotic system to win 100 percent of its games of “rock, paper, scissors” against humans. A camera tracks the motion of the opponent’s hand, recognizing minute, nearly indistinguishable changes (within 1 millisecond) in order to counter with the winning response.
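The decision step, once the opponent's throw is predicted, is trivially simple; the hard part is the millisecond-scale vision. A minimal sketch of that final step (purely illustrative, not the lab's actual code) might look like this:

```python
# Counter table: for each predicted human throw, play the move that beats it.
# (Illustrative only -- the real system's complexity lies in recognizing the
# hand shape from camera frames within ~1 ms, not in this lookup.)
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def respond(predicted_gesture: str) -> str:
    """Return the winning counter to the opponent's predicted throw."""
    return COUNTER[predicted_gesture]

print(respond("rock"))      # paper
print(respond("scissors"))  # rock
```

The takeaway: the robot never "outsmarts" the human strategically; it simply reacts faster than a human can perceive.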

I’d like to see what happens when two of these robots face off. The researchers see this as an opportunity for human-machine collaboration, with future applications allowing robots to pick up on human movement to aid in a working task.

2. Super Mario

“Neuro-evolution” was used to develop a Super Mario-playing artificial intelligence that can positively fly through levels at amazing speeds. The machine breaks down the important elements of the game (including platforms, walls, and enemies) into distinct, identifiable symbols that it uses to navigate the levels with ease.
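The core idea behind neuro-evolution is that a network's weights are improved by mutation and selection rather than by gradient training. The toy sketch below (my own illustration, not the Mario AI itself) evolves a one-neuron "jump or not" policy over symbolic inputs like those the article describes, keeping any mutation that doesn't hurt its score:

```python
import random

# Toy neuro-evolution sketch (illustrative, not the actual Mario agent):
# evolve the weights of a single-neuron policy that decides whether to jump,
# given two symbolic inputs (enemy ahead, gap ahead).
CASES = [  # (enemy_ahead, gap_ahead) -> should_jump
    ((0, 0), 0), ((1, 0), 1), ((0, 1), 1), ((1, 1), 1),
]

def act(weights, inputs):
    """Fire (jump) when the weighted sum of inputs plus bias is positive."""
    w1, w2, bias = weights
    return 1 if w1 * inputs[0] + w2 * inputs[1] + bias > 0 else 0

def fitness(weights):
    """Number of training cases the policy handles correctly (0..4)."""
    return sum(act(weights, x) == y for x, y in CASES)

def evolve(generations=200, seed=0):
    """Hill-climbing evolution: mutate weights, keep non-worse offspring."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(3)]
    for _ in range(generations):
        child = [w + rng.gauss(0, 0.5) for w in best]
        if fitness(child) >= fitness(best):
            best = child
    return best

policy = evolve()
print(fitness(policy), "of", len(CASES), "cases correct")
```

Real neuro-evolution systems (such as NEAT, which is often used for this kind of game-playing demo) evolve the network's topology as well as its weights, but the mutate-evaluate-select loop is the same in spirit.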

This isn’t the first time we’ve discussed “evolving” AI behavior here at Emerj (most recently in this article with Dr. Jekan Thanga on “evolving” the behavior of robots), but it’s probably the first time the concept has come up in the context of gaming, a topic we admittedly don’t cover much. It proves to be a fun exposé of the potential power of neural networks and machine learning, and of gaming as a petri dish for narrow AI.

3. Go

Even after computers trounced the best human players in backgammon (1979), checkers (1994), and chess (1997), some experts speculated that the complexities of the 2,500-year-old game of go could never be mastered by a machine. Unlike in chess, today’s best go computers cannot regularly defeat the absolute best human players, but they are taking on expert players and winning: a feat that would have been close to inconceivable just 10 years ago.

For years after those earlier milestones fell to machines, the go community took pride in their game as a demonstration of the higher powers of human cognition and creativity, and held that it might prove impermeable to machine opponents worthy of being taken seriously. A lot of that has now changed.

At this point in AI’s development, even skeptics are beginning to consider that human victories won’t be the norm forever. An article in MIT Technology Review put it more bluntly: “One of the last bastions of human mastery over computers is about to fall to the relentless onslaught of machine learning algorithms.”

Part of me feels disappointed that the superiority of the human mind is under siege… but the upside is that when computers can write better articles than humans (they’re coming close), I’ll have a lot of time freed up in the morning.