For years, artists and researchers have been experimenting with training neural networks to generate images that look real. But most of them look like strangely distorted, grotesque caricatures of how a computer thinks the world looks.

No longer. Over the weekend, a Google intern and two researchers from Google’s DeepMind division released a paper, currently under review for a 2019 conference, featuring AI-generated images that blow everything else out of the water. Based on the small thumbnails, it’s almost impossible to tell that they’re not real images: There’s a chestnut-colored dog with his tongue hanging out, a beautiful ocean vista, a monarch butterfly, and a juicy hamburger complete with melted cheese and a bun that looks like it was brushed with butter. The textures of the images, from the dog’s fur to the hamburger’s juices, are incredibly realistic, with careful study revealing only the tiniest of tells that the image isn’t a real one.

The paper is making waves in the research community, where some expressed shock at the image quality. Oriol Vinyals, a research scientist at DeepMind, wondered if the images were the “best GAN samples ever.” “I want to live in a #BIGGAN generated world!” wrote Meltem Atay, a neurotechnology PhD student who focuses on machine learning. One observer noted that the images are “unbelievably detailed,” and another asked, “Wait . . . these are generated images?”

The algorithm that did this? It’s called BigGAN, the last three letters of which stand for generative adversarial network. This kind of neural net is composed of two models: one that conjures random images out of random numbers, and one that compares these generated images to real images and tells the generator just how far off it is. GANs are common in machine learning research, and BigGAN’s design isn’t radically different from other algorithms out there. What sets it apart is scale: BigGAN throws a ton of computational power, courtesy of Google, at the problem.
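To make the two-model tug-of-war concrete, here is a deliberately tiny sketch of a GAN training loop. This is not BigGAN: it uses one-dimensional numbers in place of images, simple linear models in place of deep networks, and all of the sizes and learning rates are invented for illustration. But the structure is the same one described above: a generator turns random noise into fakes, and a discriminator scores fakes against real data and feeds that signal back.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Turns random noise z into a "fake" sample.
    return z * w[0] + w[1]

def discriminator(x, v):
    # Scores how "real" a sample looks, as a probability (sigmoid).
    return 1.0 / (1.0 + np.exp(-(x * v[0] + v[1])))

w = np.array([0.1, 0.0])   # generator weights (toy values)
v = np.array([0.1, 0.0])   # discriminator weights (toy values)
lr = 0.05

real = rng.normal(4.0, 1.0, size=64)   # stand-in for "real images"
for step in range(2000):
    z = rng.normal(size=64)
    fake = generator(z, w)

    # Discriminator step: push scores up on real data, down on fakes.
    d_real, d_fake = discriminator(real, v), discriminator(fake, v)
    grad_v0 = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_v1 = np.mean(1 - d_real) - np.mean(d_fake)
    v += lr * np.array([grad_v0, grad_v1])

    # Generator step: use the discriminator's feedback ("how far off
    # it is") to make the fakes score as more realistic.
    d_fake = discriminator(fake, v)
    g_sig = (1 - d_fake) * v[0]
    w += lr * np.array([np.mean(g_sig * z), np.mean(g_sig)])

# Over training, the fakes' mean drifts toward the real data's mean.
```

Scaled up from linear toys to deep convolutional networks trained on millions of photos, this same adversarial loop is what produces the dogs and hamburgers in the paper.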

This strategy produces far superior results, while raising questions about how much energy machine learning is consuming.

“The main thing these models need is not algorithmic improvements, but computational ones,” says Andrew Brock, a PhD student at the Edinburgh Centre for Robotics and the Google intern who wrote the paper. “When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect.”

In other words, by adding more nodes to increase the complexity of the neural network and showing the model far more images than most researchers do, Brock was able to create a system that more accurately understands and models textures, and then combines these individual textures to generate bigger forms, like that of a puppy.
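The “more nodes” half of that twofold effect compounds quickly, because in a fully connected layer the weight count is the product of the widths on either side of it. A back-of-the-envelope calculation makes the point; the layer sizes below are invented for illustration and have nothing to do with BigGAN’s actual architecture:

```python
def dense_params(widths):
    # Weights (a*b) plus biases (b) for each consecutive pair of layers.
    return sum(a * b + b for a, b in zip(widths, widths[1:]))

layers = [128, 256, 256, 128]
base = dense_params(layers)
wide = dense_params([w * 2 for w in layers])

ratio = wide / base  # doubling every layer's width ≈ 4x the parameters
```

So a modest-sounding increase in width translates into a much larger model, and a much larger appetite for the kind of computational power Google can supply.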