Last year, Google's AlphaGo AI beat the Korean champion Lee Sedol at Go, a game many expected humans to continue to dominate for years, if not decades, to come.

With the 37th move in the match’s second game, AlphaGo landed a surprise on the right-hand side of the 19-by-19 board that flummoxed even the world’s best Go players, including Lee Sedol. “That’s a very strange move,” said one commentator, himself a nine-dan Go player, the highest rank there is. “I thought it was a mistake,” said the other. Lee Sedol, after leaving the match room, took nearly fifteen minutes to formulate a response. Fan Hui—the three-time European Go champion who played AlphaGo during a closed-door match in October, losing five games to none—reacted with incredulity. But then, drawing on his experience with AlphaGo—he has played the machine time and again in the five months since October—Fan Hui saw the beauty in this rather unusual move.



Indeed, the move turned the course of the game. AlphaGo went on to win Game Two, and at the post-game press conference, Lee Sedol was in shock. “Yesterday, I was surprised,” he said through an interpreter, referring to his loss in Game One. “But today I am speechless. If you look at the way the game was played, I admit, it was a very clear loss on my part. From the very beginning of the game, there was not a moment in time when I felt that I was leading.”



The first time Garry Kasparov sensed deep intelligence in Deep Blue, he described the computer's move as a very human one.

I GOT MY FIRST GLIMPSE OF ARTIFICIAL INTELLIGENCE ON Feb. 10, 1996, at 4:45 p.m. EST, when in the first game of my match with Deep Blue, the computer nudged a pawn forward to a square where it could easily be captured. It was a wonderful and extremely human move. If I had been playing White, I might have offered this pawn sacrifice. It fractured Black's pawn structure and opened up the board. Although there did not appear to be a forced line of play that would allow recovery of the pawn, my instincts told me that with so many "loose" Black pawns and a somewhat exposed Black king, White could probably recover the material, with a better overall position to boot.



But a computer, I thought, would never make such a move. A computer can't "see" the long-term consequences of structural changes in the position or understand how changes in pawn formations may be good or bad.



Humans do this sort of thing all the time. But computers generally calculate each line of play as far as possible within the time allotted. Because chess is a game of virtually limitless possibilities, even a beast like Deep Blue, which can look at more than 100 million positions a second, can go only so deep. When computers reach that point, they evaluate the various resulting positions and select the move leading to the best one. And because computers' primary way of evaluating chess positions is by measuring material superiority, they are notoriously materialistic. If they "understood" the game, they might act differently, but they don't understand.
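To make the contrast concrete, here is a minimal sketch of the approach Kasparov describes: search as deep as the budget allows, then score the horizon positions purely by material. This is a toy illustration in Python with a hand-built game tree standing in for a real move generator; none of it is Deep Blue's actual code.

```python
# Toy sketch of depth-limited search with a purely material evaluation.
# A "position" is a dict of piece lists plus optional child positions;
# a real engine would generate moves, but the shape of the idea is the same.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(position):
    """Material superiority from White's point of view."""
    white = sum(PIECE_VALUES[p] for p in position["white"])
    black = sum(PIECE_VALUES[p] for p in position["black"])
    return white - black

def negamax(position, depth, color):
    """Search as deep as `depth` allows; at the horizon, fall back on
    the material count -- the "notoriously materialistic" part."""
    children = position.get("children", [])
    if depth == 0 or not children:
        return color * material_score(position)
    return max(-negamax(child, depth - 1, -color) for child in children)

# Two candidate continuations: one grabs a pawn, one keeps the balance.
# A material-only evaluator always prefers the pawn grab, because any
# structural consequences beyond the search horizon are invisible to it.
root = {
    "white": ["Q", "R", "P"], "black": ["Q", "R", "P"],
    "children": [
        {"white": ["Q", "R", "P"], "black": ["Q", "R"]},      # wins a pawn: +1
        {"white": ["Q", "R", "P"], "black": ["Q", "R", "P"]}, # equal: 0
    ],
}

best = negamax(root, 1, 1)  # White to move, one ply of search
print(best)  # the pawn-grabbing line scores +1
```

This is exactly why Deep Blue's pawn *sacrifice* felt impossible to Kasparov: an evaluation built on material should never voluntarily hand a pawn away.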



So I was stunned by this pawn sacrifice. What could it mean? I had played a lot of computers but had never experienced anything like this. I could feel--I could smell--a new kind of intelligence across the table. While I played through the rest of the game as best I could, I was lost; it played beautiful, flawless chess the rest of the way and won easily.



Later, in the Kasparov-Deep Blue rematch that IBM's computer won, a move in the second game was again pivotal. There is debate over whether the move was a mistake or intentional on the part of the computer, but it flummoxed Kasparov (italics mine):

''I was not in the mood of playing at all,'' he said, adding that after Game 5 on Saturday, he had become so dispirited that he felt the match was already over. Asked why, he said: ''I'm a human being. When I see something that is well beyond my understanding, I'm afraid.''



...



At the news conference after the game, a dark-eyed and brooding champion said that his problems began after the second game, won by Deep Blue after Mr. Kasparov had resigned what was eventually shown to be a drawn position. Mr. Kasparov said he had missed the draw because the computer had played so brilliantly that he thought it would have obviated the possibility of the draw known as perpetual check. ''I do not understand how the most powerful chess machine in the world could not see simple perpetual check,'' he said.

He added he was frustrated by I.B.M.'s resistance to allowing him to see the printouts of the computer's thought processes so he could understand how it made its decisions, and implied again that there was some untoward behavior by the Deep Blue team. Asked if he was accusing I.B.M. of cheating, he said: ''I have no idea what's happening behind the curtain. Maybe it was an outstanding accomplishment by the computer. But I don't think this machine is unbeatable.''

Mr. Kasparov, who defeated a predecessor of Deep Blue a year ago, won the first game of this year's match, but it was his last triumph, a signal that the computer's pattern of thought had eluded him. He couldn't figure out what its weaknesses were, or if he did, how to exploit them.

Legend has it that a move in Game One and another in Game Two were actually just programming glitches that caused Deep Blue to make random moves that threw Kasparov off, but regardless, the theme is the same: at some point he no longer understood what the program was doing. He no longer had a working mental model, like material advantage, for his computer opponent.

This year, a new version of AlphaGo was unleashed on the world: AlphaGo Zero.

As many will remember, AlphaGo—a program that used machine learning to master Go—decimated world champion Ke Jie earlier this year. Then, the program’s creators at Google’s DeepMind let the program continue to train by playing millions of games against itself. In a paper published in Nature earlier this week, DeepMind revealed that a new version of AlphaGo (which they christened AlphaGo Zero) picked up Go from scratch, without studying any human games at all. AlphaGo Zero took a mere three days to reach the point where it was pitted against an older version of itself and won 100 games to zero.




That AlphaGo Zero had nothing to learn from playing the world's best humans, and that it trounced its artificial parent 100-0, represents evolutionary velocity of a majesty not seen since the xenomorphs in the Alien movie franchise. It is also, in its arrogance, terrifying.

DeepMind released 55 games that a previous version of AlphaGo played against itself for Go players around the world to analyze.

Since May, experts have been painstakingly analyzing the 55 machine-versus-machine games. And their descriptions of AlphaGo’s moves often seem to keep circling back to the same several words: Amazing. Strange. Alien.



“They’re how I imagine games from far in the future,” Shi Yue, a top Go player from China, has told the press. A Go enthusiast named Jonathan Hop who’s been reviewing the games on YouTube calls the AlphaGo-versus-AlphaGo face-offs “Go from an alternate dimension.” From all accounts, one gets the sense that an alien civilization has dropped a cryptic guidebook in our midst: a manual that’s brilliant—or at least, the parts of it we can understand.



[...]



Some moves AlphaGo likes to make against its clone are downright incomprehensible, even to the world’s best players. (These tend to happen early on in the games—probably because that phase is already mysterious, being farthest away from any final game outcome.) One opening move in Game One has many players stumped. Says Michael Redmond, the American nine-dan professional who has been commentating the AlphaGo games, “I think a natural reaction (and the reaction I’m mostly seeing) is that they just sort of give up, and sort of throw their hands up in the opening. Because it’s so hard to try to attach a story about what AlphaGo is doing. You have to be ready to deny a lot of the things that we’ve believed and that have worked for us.” Like others, Redmond notes that the games somehow feel “alien.” “There’s some inhuman element in the way AlphaGo plays,” he says, “which makes it very difficult for us to just even sort of get into the game.”



Ke Jie, the Chinese Go master who was defeated by AlphaGo earlier this year, said:

“Last year, it was still quite humanlike when it played. But this year, it became like a god of Go.”



After his defeat, Ke posted what might be the most poetic and bracing quote of 2017 on Weibo (I first saw it in the WSJ):

“I would go as far as to say not a single human has touched the edge of the truth of Go.”



***

When Josh Brown died in his Tesla after it drove under a semi, it kicked off a months-long investigation into who was at fault. Ultimately, the NHTSA absolved Autopilot of blame. The driver was said to have had seven seconds to see the semi and apply the brakes, but was suspected of watching a movie while the car was in Autopilot.

In this instance, it appeared enough evidence could be gathered to make such a determination. In the future, diagnosing why Autopilot or other self-driving algorithms made certain choices will likely only grow more challenging as the algorithms grow in complexity.

At times, when I have my Tesla in Autopilot mode, the car will do something bizarre and I'll take over. For example, if I drive to work out of San Francisco, I have to exit left and merge onto the 101 using a ramp that arcs to the left almost 90 degrees. There are two lanes on that ramp, but even if I start in the far left lane and am following a car in front of me, my car always seems to try to slide over into the right lane.

Why does it do that? My only mental model is the one I know, which is my own method for driving. I look at the road, look for lane markings and other cars, and turn a steering wheel to stay in a safe zone in my lane. But thinking that my car drives using that exact process says more about my limited imagination than anything else because Autopilot doesn't drive the way humans do. This becomes evident when you look at videos showing how a self-driving car "sees" the road.

When I worked at Flipboard, we moved to a home feed that tried to select articles for users based on machine learning. That algorithm continued to be tweaked and evolved over time, trying to optimize for engagement. Some of that tweaking was done by humans, but a lot of it was done by ML.

At times, people would ask why a certain article had been selected for them. Was it because they had once read a piece on astronomy? Dwelled for a few seconds on a headline about NASA? By that point, the algorithm was so complex that it was impossible to offer an explanation that made intuitive sense to a human; there were simply too many features and interactions in play.
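A back-of-the-envelope calculation shows why such explanations stop making intuitive sense. The feature count below is hypothetical, not Flipboard's actual number; the point is only that pairwise feature interactions alone grow quadratically.

```python
from math import comb

# Hypothetical scale, just for illustration -- not Flipboard's real model.
n_features = 10_000              # e.g. topics read, dwell times, follows
pairwise = comb(n_features, 2)   # interactions between pairs of features

print(pairwise)
```

With ten thousand features there are nearly fifty million candidate pairwise interactions, before even considering higher-order ones. No "because you read an astronomy piece" story can summarize a decision made in that space.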

As more of the world comes to rely on artificial intelligence, and as AI makes great advances, we will walk to the edge of a chasm of comprehension. We've long thought that artificial intelligence might surpass us eventually by thinking like us, but better. But the more likely scenario, as recent developments have shown us, is that the most powerful AI may not think like us at all, and we, with our human brains, may never understand how it thinks. Like an ant that cannot understand a bit of what the human towering above it is thinking, we will gaze into our AI in blank incomprehension. We will gaze into the void. The limit to our ability to comprehend another intelligence is our ability to describe its workings, and that asymptote is drawn by the limits of our brain, which largely analogizes all forms of intelligence to itself in a form of unwitting intellectual narcissism.

This is part of the general trend of increasing abstraction that marks modern life, but it is different from not knowing how a laptop is made, or how to sew a shirt for oneself. We take solace in knowing that someone out there can. To admit that it's not clear to any human alive how an AI made a particular decision feels less like a ¯\_(ツ)_/¯ and more like the end of some innocence.

I suspect we'll continue to tolerate that level of abstraction when technology functions as we want it to, but we'll bang our heads in frustration when it doesn't. Like the annoyance we feel when we reach the limits of our ability to answer a young child who keeps asking us "Why?" in recursive succession, this frustration will cut deep because it will be indistinguishable from humiliation.