Along America’s west coast, the world’s most valuable companies are racing to make artificial intelligence smarter. Google and Facebook have boasted of experiments using billions of photos and thousands of high-powered processors. But late last year, a project in eastern Tennessee quietly exceeded the scale of any corporate AI lab. It was run by the US government.

The record-setting project involved the world’s most powerful supercomputer, Summit, at Oak Ridge National Lab. The machine captured that crown in June last year, reclaiming the title for the US after five years of China topping the list. As part of a climate research project, the giant computer booted up a machine-learning experiment that ran faster than any before.

Summit, which occupies an area equivalent to two tennis courts, put more than 27,000 powerful graphics processors to work on the project. It tapped their power to train deep-learning algorithms, the technology driving AI’s frontier, chewing through the exercise at a billion billion operations per second, a rate known in supercomputing circles as an exaflop.
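A billion billion is 10^18, and a little back-of-envelope arithmetic shows what that rate implies. The sketch below uses only figures quoted in this article; the even split across GPUs and the workload size are illustrative assumptions:

```python
# Back-of-envelope arithmetic for the quoted exaflop figure.
# The GPU count is from the article; the even split and the
# workload size below are illustrative assumptions.

OPS_PER_SECOND = 10**18          # one exaflop: a billion billion ops per second
NUM_GPUS = 27_000                # GPUs the article says Summit used

# Per-GPU throughput implied by the aggregate rate (assuming an even split)
per_gpu = OPS_PER_SECOND / NUM_GPUS
print(f"~{per_gpu:.2e} ops/s per GPU")   # roughly 37 teraops per GPU

# Time to perform a hypothetical 10^21-operation workload at that rate
workload = 10**21
print(f"{workload / OPS_PER_SECOND:.0f} seconds")   # 1000 seconds
```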

“Deep learning has never been scaled to such levels of performance before,” says Prabhat, who leads a research group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Lab. (He goes by one name.) His group collaborated with researchers at Summit’s home base, Oak Ridge National Lab.

Fittingly, the AI workout of the world’s most powerful computer was focused on one of the world’s largest problems: climate change. Tech companies train algorithms to recognize faces or road signs; the government scientists trained theirs to detect weather patterns, like cyclones, in the copious output of climate simulations that spool out a century’s worth of three-hour forecasts for Earth’s atmosphere. (It’s unclear how much power the project consumed, or how much carbon that spewed into the air.)
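The real project trained deep neural networks on simulated atmospheric fields; as a much-simplified stand-in for what such a detector learns, the toy sketch below flags a cyclone-like low-pressure center in a tiny hand-made pressure grid. All of the grids, names, and thresholds here are hypothetical:

```python
# Toy stand-in for a learned cyclone detector: the Summit project used
# deep networks on climate-simulation output; this sketch just checks
# for a deep central low in a hand-made pressure field (values in hPa).

def center_depression(grid):
    """Center pressure minus the field's mean (negative = a low)."""
    n = len(grid)
    mean = sum(sum(row) for row in grid) / (n * n)
    return grid[n // 2][n // 2] - mean

def looks_like_cyclone(grid, threshold=-10.0):
    """Crude detector: a deep enough central depression counts."""
    return center_depression(grid) < threshold

calm = [[1002, 1001, 1003],
        [1000, 1001, 1002],
        [1003, 1002, 1001]]          # fairly flat pressure field

storm = [[1002, 1001, 1003],
         [1000,  965, 1002],
         [1003, 1002, 1001]]         # deep low at the center

print(looks_like_cyclone(calm))      # False
print(looks_like_cyclone(storm))     # True
```

A trained network replaces the hand-picked feature and threshold with millions of learned parameters, but the task is the same: map a gridded atmospheric field to a yes-or-no pattern label.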

Summit's racks of equipment are connected by over 185 miles of fiber-optic cable and circulate 4,000 gallons of water a minute to cool the machine's 37,000 processors. (Photo: Carlos Jones/Oak Ridge National Lab)

The Summit experiment has implications for the future of both AI and climate science. The project demonstrates the scientific potential of adapting deep learning to supercomputers, which traditionally simulate physical and chemical phenomena such as nuclear explosions, black holes, or new materials. It also shows that machine learning can benefit from more computing power, when you can get it, which bodes well for future breakthroughs.

“We didn’t know until we did it that it could be done at this scale,” says Rajat Monga, an engineering director at Google. He and other Googlers helped the project by adapting the company’s open-source TensorFlow machine-learning software to Summit’s giant scale.
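The core idea behind spreading training across thousands of processors is data parallelism: each worker computes gradients on its own shard of the data, and the averaged gradient drives one shared update. The pure-Python sketch below illustrates that idea on a trivial one-parameter model; the worker count, data, and learning rate are all hypothetical, and the real engineering work was adapting TensorFlow itself to Summit's scale:

```python
# Sketch of data-parallel training: each "worker" computes the gradient
# on its shard of the batch, then an all-reduce averages the gradients
# into one shared update. Pure-Python stand-in with hypothetical numbers.

def grad(w, shard):
    """Mean gradient of squared error for the model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

# Synthetic data with true weight 3, split evenly across 4 "workers"
data = [(x, 3 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w, lr = 0.0, 0.01
for _ in range(200):
    # Each worker computes its local gradient (in parallel, in real systems)...
    local = [grad(w, s) for s in shards]
    # ...then the averaged gradient is applied everywhere identically.
    w -= lr * sum(local) / len(local)

print(round(w, 3))   # converges to the true weight, 3.0
```

With equal shard sizes, averaging the per-worker gradients reproduces the gradient over the full batch, which is why the distributed run follows the same trajectory as a single large-batch update.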