When DeepMind’s AlphaGo artificial intelligence defeated Lee Sedol, the Korean Go champion, for the first time last year, it stunned the world. Many, including Sedol himself, didn’t expect an AI to have mastered the complicated board game, but it won four of the five games in their match, proving it could compete with the best human players. More than a year has passed, and today’s AlphaGo makes last year’s version seem positively quaint.

Google’s latest AI efforts push beyond the limitations of the company’s human developers. Its artificial intelligence algorithms are teaching themselves how to code and how to master Go, the ancient board game that is easy to learn but deeply intricate.

This has been quite the week for the company. On Monday, researchers announced that Google’s AutoML project had successfully taught itself to program machine learning software on its own. While it’s limited to basic programming tasks, the code AutoML created was, in some cases, better than the code written by its human counterparts. In a program designed to identify objects in a picture, the AI-created algorithm achieved a 43 percent success rate at the task; the human-developed code, by comparison, scored only 39 percent.

On Wednesday, in a paper published in the journal Nature, DeepMind researchers revealed another remarkable achievement. The newest version of its Go-playing algorithm, dubbed AlphaGo Zero, was not only better than the original AlphaGo, which defeated the world’s best human player in May; it had also taught itself how to play, entirely on its own, given only the basic rules of the game. (The original, by comparison, learned from a database of 100,000 Go games.) According to Google’s researchers, AlphaGo Zero has achieved superhuman performance, winning 100 games to none against its champion predecessor, AlphaGo.

But DeepMind’s developments go beyond just playing a board game exceedingly well. They carry important implications for how AI systems will be built in the near future.

“By not using human data—by not using human expertise in any fashion—we’ve actually removed the constraints of human knowledge,” AlphaGo Zero’s lead programmer, David Silver, said at a press conference.

Until now, modern AIs have largely relied on learning from vast data sets. The bigger the data set, the better. What AlphaGo Zero and AutoML prove is that a successful AI doesn’t necessarily need those human-supplied data sets—it can teach itself.
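For a concrete, if deliberately tiny, picture of what “teaching itself from only the rules” means, consider the sketch below. It is not DeepMind’s method: AlphaGo Zero combines deep neural networks with Monte Carlo tree search, while this toy uses the much simpler game of Nim (take 1–3 stones; whoever takes the last stone wins) and a basic tabular Monte Carlo value update. The game choice and all hyperparameters are illustrative assumptions. What it shares with the real system is the core idea: the agent is handed nothing but the legal moves, plays both sides against itself, and improves from the outcomes of its own games.

```python
import random
from collections import defaultdict

# Q[(pile, move)] = learned value of taking `move` stones from `pile`,
# from the perspective of the player about to move.
Q = defaultdict(float)
ALPHA, EPS = 0.5, 0.2  # learning rate and exploration rate (toy values)

def legal_moves(pile):
    """The only knowledge supplied by humans: the rules of the game."""
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile, greedy=False):
    moves = legal_moves(pile)
    if not greedy and random.random() < EPS:
        return random.choice(moves)                 # explore
    return max(moves, key=lambda m: Q[(pile, m)])   # exploit

def train(episodes=30000, start=21):
    for _ in range(episodes):
        pile, history = start, []
        while pile > 0:                    # self-play: one agent, both sides
            move = choose(pile)
            history.append((pile, move))
            pile -= move
        reward = 1.0                       # whoever took the last stone won
        for pile_before, move in reversed(history):
            q = Q[(pile_before, move)]
            Q[(pile_before, move)] = q + ALPHA * (reward - q)
            reward = -reward               # flip to the other player's view
```

After enough self-play games, the greedy policy rediscovers Nim’s known optimal strategy (always leave your opponent a pile that is a multiple of four), even though no example games were ever supplied.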

This could be important in the face of our current consumer-facing AI mess. Written by human programmers and trained on human-supplied data, today’s algorithms (such as the ones Google and Facebook use to suggest articles you should read) are subject to the same defects as their human overlords. A data set can be flawed or skewed; a facial recognition algorithm may struggle with Black faces, for example, because its programmers didn’t feed it a diverse enough set of images. Without that human interference and influence, future AIs could be far superior to what we see deployed in the wild today. An AI that teaches itself wouldn’t inherently be sexist or racist, or suffer from those kinds of unconscious biases.

In the case of AlphaGo Zero, its reinforcement-learning approach is also good news for the computational demands of advanced AI networks. Early versions of AlphaGo ran on 48 of Google’s custom-built TPU chips; AlphaGo Zero runs on only four. It’s far more efficient and practical than its predecessors. Paired with AutoML’s ability to develop its own machine learning algorithms, that efficiency could seriously speed up the pace of DeepMind’s AI-related discoveries.

And while playing the game of Go may seem like a silly endeavor for an AI, it actually makes a lot of sense. AlphaGo Zero has to sort through an enormous amount of complicated information to decide which moves to make in a game. (There are approximately 10^170 possible positions on a Go board.) As DeepMind co-founder Demis Hassabis told the Verge, AlphaGo Zero could be reprogrammed to sort through other kinds of data instead, such as particle physics, quantum chemistry, or drug discovery. As with Go, AlphaGo Zero could end up uncovering techniques humans have overlooked, or reaching conclusions we hadn’t yet explored.
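The oft-quoted figure of roughly 10^170 positions is easy to sanity-check. Each of the board’s 19 × 19 = 361 intersections can be empty, black, or white, which bounds the raw configurations at 3^361, on the order of 10^172; the number of *legal* positions is smaller, about 2 × 10^170, as computed exactly by John Tromp in 2016. A few lines of Python confirm the order of magnitude:

```python
import math

# Upper bound: each of the 361 intersections is empty, black, or white.
raw_configurations = 3 ** 361

# floor(log10(3^361)) = 172, so ~10^172 raw boards: comfortably more
# than the ~10^80 atoms estimated in the observable universe.
print(f"3^361 ~ 10^{math.floor(math.log10(raw_configurations))}")
```

Numbers at that scale are why Go can’t be solved by brute-force enumeration, and why a system that learns an evaluation strategy, rather than searching everything, was needed.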

There’s a lot of reason to fear AI, but DeepMind’s AIs aren’t programming themselves to destroy the human race. They’re programming themselves in ways that shift some of the tedium off human developers’ shoulders and examine problems and data sets in a fresh light. It’s astonishing to think how far AI has come in just the past few years, and it’s clear from this week that progress is only going to come faster.