In a new piece of research — carried out by investigators at Rutgers University, the University of North Carolina at Charlotte, Lehigh University, and the Chinese University of Hong Kong — neural networks have been used to generate high quality images based on nothing more detailed than basic text descriptions.

“Generating realistic images from text descriptions has many applications,” researcher Han Zhang told Digital Trends. “Previous approaches have difficulty in generating high resolution images, and their synthesized images in many cases lack details and vivid object parts. Our StackGAN for the first time generates 256 x 256 images with photo-realistic details.”

A video of the work was shared online by YouTuber Károly Zsolnai-Fehér as part of his excellent series of Two Minute Papers educational videos.

“For many years, we have trained neural networks to perform tasks like face, traffic sign, or handwriting recognition,” Zsolnai-Fehér told us. “Generally, with millions of training examples, we show the neural network how to do something and expect it to learn these concepts and perform well on its own afterwards. This piece of work is completely different: here, after learning, the neural networks are able to create something completely new — such as synthesizing new, photorealistic images from a piece of text we have written. This opens up a world of possibilities, and I am super-excited to see where researchers take this concept in the future.”

While there have certainly been examples of computational creativity before — ranging from MIT’s Nightmare Machine to projects that can generate predictive video simply by looking at a still image — this is nonetheless an intriguing piece of work. It’s also fascinating because the two-stage method of drawing images looks, to our way of thinking, a whole lot like the way artists will sketch out a piece of work, and then do a second pass to add detail.
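That two-stage shape is easy to picture in code. The sketch below is a toy, non-learned NumPy stand-in for the StackGAN pipeline, not the actual model: a Stage-I step turns a text embedding plus noise into a coarse low-resolution "sketch," and a Stage-II step upsamples and refines it while conditioning on the text again. Every function here (the hashed `embed_text` encoder, the random linear generators) is an illustrative assumption; the real system trains two conditional GANs on large image datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_text(text, dim=16):
    """Toy stand-in for a learned text encoder: hash words into a fixed vector."""
    vec = np.zeros(dim)
    words = text.split()
    for word in words:
        vec[hash(word) % dim] += 1.0
    return vec / max(len(words), 1)

def stage1_generator(text_emb, noise, size=64):
    """Stage I: 'sketch' a coarse low-resolution image from text + noise."""
    w = rng.standard_normal((size * size * 3, text_emb.size + noise.size))
    x = w @ np.concatenate([text_emb, noise])
    return np.tanh(x).reshape(size, size, 3)  # pixel values in [-1, 1]

def stage2_generator(coarse, text_emb, scale=4):
    """Stage II: upsample the sketch and refine it, conditioned on the text again."""
    up = coarse.repeat(scale, axis=0).repeat(scale, axis=1)  # nearest-neighbour upsample
    refinement = 0.1 * np.tanh(text_emb.mean()) * rng.standard_normal(up.shape)
    return np.clip(up + refinement, -1.0, 1.0)

emb = embed_text("a small bird with a red head and a white belly")
noise = rng.standard_normal(8)

coarse = stage1_generator(emb, noise)  # 64 x 64 sketch
fine = stage2_generator(coarse, emb)   # 256 x 256 refined output
print(coarse.shape, fine.shape)
```

The point of the split, in the paper's framing, is that the first stage only has to get layout and colour roughly right, while the second stage adds photo-realistic detail at the full 256 × 256 resolution.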

We may still be some way from replacing human illustrators with robots, but this is nonetheless an exciting leap forward.