Don’t challenge this algorithm to a board game: chances are it can learn to outsmart you inside a day.

Earlier this year, we reported that Alphabet’s machine-learning subsidiary, DeepMind, had made a huge advance. Using an artificial-intelligence approach known as reinforcement learning, it had enabled its AlphaGo software to develop superhuman skills for the game of Go without needing human data. Armed with just the rules of the game, the AI was able to make random plays until it developed champion-beating strategies. The new software was dubbed AlphaGo Zero because it didn’t need any human input.
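The core idea of that approach can be sketched in miniature. The toy game, function names, and hyperparameters below are my own illustrative assumptions, and the method is simple tabular self-play learning rather than anything DeepMind used (AlphaZero combines deep neural networks with tree search); it only shows how an agent that starts out playing randomly can improve using nothing but game outcomes.

```python
import random

# Toy self-play learner for a tiny Nim-style game: a pile of stones, players
# alternate taking 1 or 2, and whoever takes the last stone wins. Optimal play
# leaves the opponent a multiple of 3. All names and settings are illustrative.

ACTIONS = (1, 2)
Q = {}  # Q[(pile, action)] -> average observed outcome for the player to move
N = {}  # visit counts, used to keep Q a running average

def q(pile, a):
    return Q.get((pile, a), 0.0)

def choose(pile, eps):
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < eps:                    # explore: play randomly
        return random.choice(legal)
    return max(legal, key=lambda a: q(pile, a))  # exploit: best move found so far

def self_play_train(episodes=20000, eps=0.2, start_pile=10):
    for _ in range(episodes):
        pile, moves = start_pile, []
        while pile > 0:                          # the agent plays both sides
            a = choose(pile, eps)
            moves.append((pile, a))
            pile -= a
        # Whoever took the last stone won. Walk back through the game,
        # alternating credit between the two players, and nudge each
        # state-action value toward its average observed outcome.
        reward = 1.0
        for state in reversed(moves):
            N[state] = N.get(state, 0) + 1
            Q[state] = q(*state) + (reward - q(*state)) / N[state]
            reward = -reward

def best_move(pile):
    return max([a for a in ACTIONS if a <= pile], key=lambda a: q(pile, a))

random.seed(0)
self_play_train()
# Optimal play leaves a multiple of 3: take 1 from a pile of 4, 2 from 5.
print(best_move(4), best_move(5))
```

Starting from pure random play, the learned values converge on the known optimal strategy for this game; the same outcome-driven loop, scaled up enormously, is the spirit of the self-play training described above.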

Now, in a paper published on arXiv, the DeepMind team reports that the software has been generalized so that it can learn other games. It describes two new examples in which the generalized program, dubbed AlphaZero, was unleashed on chess and shogi, a Japanese game that’s similar to chess. In both cases the software was able to develop superhuman skills within 24 hours, and then “convincingly defeated a world-champion program.”

It’s perhaps not too surprising that the AI was able to pick up killer skills for the two games so quickly: both chess and shogi are less complex than Go, with far smaller search spaces. But DeepMind’s ability to generalize the software, so that it can master different games, hints at increasingly adaptable kinds of machine intelligence.

That said, there are still games that AI hasn’t yet mastered. Perhaps the biggest challenge—which DeepMind is already working on—lies in massively complex online strategy games like StarCraft, at which humans are still superior. As we’ve explained in the past, machines will need to develop new skills, such as memory and planning, in order to steal away that crown. But don’t expect it to take too long.