The sunny weather in California is ideal for training self-driving cars, but it does have its drawbacks. After all, if your autonomous vehicle has only ever driven in perfect visibility, what happens when it runs into a bit of rain or snow? Researchers at Nvidia might have a solution, publishing details this week of an AI framework that lets computers imagine what a sunny street looks like when it’s raining, snowing, or even pitch-black outside. That’s important information for self-driving cars, but the work could have many more applications besides.

The research is based on an AI method that’s particularly good at generating visual data: a generative adversarial network, or GAN. GANs work by pitting two separate neural networks against each other: a generator that creates the data, and a discriminator that judges it, rejecting samples that don’t look realistic. In this way, the AI teaches itself to produce better and better results over time. This sort of program is common in the industry, and has been used to create all sorts of imagery, from fake celebrity faces to new clothing designs to nightmarish cats.
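The generator-versus-discriminator game can be sketched in a few lines of plain NumPy. Here both networks are deliberately tiny (the generator is one affine transform of noise, the discriminator a logistic regression), and every name and hyperparameter is illustrative rather than anything from Nvidia's work. Over many rounds, the generator's samples drift toward the real data distribution because fooling the discriminator is the only way to lower its own loss:

```python
import numpy as np

# Toy GAN on 1-D data: real samples come from a Gaussian centered at 4;
# the generator starts out producing samples centered at 0 and must
# learn to mimic the real distribution. Illustrative sketch only.

rng = np.random.default_rng(0)

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * noise + g_b
d_w, d_b = 0.0, 0.0   # discriminator: p(real) = sigmoid(d_w * x + d_b)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, batch = 0.05, 64
g_b_hist = []  # track the generator's offset over training

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)      # samples from the target
    noise = rng.normal(0.0, 1.0, batch)
    fake = g_w * noise + g_b

    # Discriminator step: raise p(real) on real data, lower it on fakes.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # Generator step: nudge fakes so the discriminator calls them real.
    noise = rng.normal(0.0, 1.0, batch)
    fake = g_w * noise + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = (1 - p_fake) * d_w          # gradient of log p(fake) w.r.t. fake
    g_w += lr * np.mean(grad_fake * noise)
    g_b += lr * np.mean(grad_fake)
    g_b_hist.append(g_b)

# The generator's output mean drifts from 0 toward the real mean.
print(float(np.mean(fake)))
```

The key point the article describes is visible in the loop: neither network is told what a "correct" sample looks like; each only gets feedback from the other, and quality improves as the two compete.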

Nvidia’s research, though, has one big advantage over existing GANs: it learns with much less supervision. Generally, programs of this sort need labeled datasets to generate data. As Nvidia researcher Ming-Yu Liu explained to The Verge, this means that if you’re making a GAN that turns a daytime scene into a nighttime one, you’d need to feed it pairs of images taken at the same location during the day and at night. It would then study the difference between the two to generate new examples.

But Nvidia’s new program doesn’t need this prep work: it trains without paired, labeled datasets, yet produces results of similar quality. This could be a major advantage for AI researchers, as it frees up time they would otherwise have to dedicate to sorting and labeling their training data.
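The difference between paired and unpaired training can be made concrete with a toy example. Below, "images" are just brightness values, and a day-to-night translator is fit using only the overall statistics of two unpaired collections, with no image matched to any other. The fit is then checked with a cycle-consistency test (day → night → day should recover the input), one common ingredient in unpaired translation models. This is a didactic stand-in for the idea, not Nvidia's actual architecture:

```python
import numpy as np

# Two UNPAIRED collections: bright "day" scenes and dark "night"
# scenes. No sample in one set corresponds to any sample in the other.
rng = np.random.default_rng(1)
day = rng.uniform(0.6, 1.0, 500)
night = 0.3 * rng.uniform(0.6, 1.0, 500)

# Distribution-level fit (a crude stand-in for an adversarial loss):
# choose a scale so translated day scenes match the night set's
# overall brightness statistics.
scale = night.mean() / day.mean()

def day_to_night(x):
    return scale * x

def night_to_day(x):
    return x / scale

# Cycle consistency: translating day -> night -> day should give back
# the original input, even though no paired examples were ever seen.
cycle_error = np.abs(night_to_day(day_to_night(day)) - day).mean()
print(scale, cycle_error)
```

The point is that `scale` was recovered purely from unpaired data: nothing in the fit required a photo of the same street by day and by night, which is exactly the labeling burden the Nvidia approach avoids.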

“We are among the first to tackle the problem,” Ming-Yu told The Verge. “[And] there are many applications. For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains. We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.”

And the program doesn’t just work on pictures of streets, of course. Ming-Yu and his colleagues also tested it on pictures of cats and dogs, turning one breed into another, and used it to change the expression of people’s faces in photographs. It’s similar to the technology used in face-changing apps like FaceApp, and, like other research in this area, it raises fears about AI being used to create fake imagery that could trick people online.

“This work can be used for image editing,” suggests Ming-Yu, although he adds that there are no concrete applications for the program just yet. “We’re making this research available to our product teams and customers. I can’t comment on the speed or extent of their adoption.”

You can read the research paper in full here, and the work is also being presented this week at the NIPS AI conference in Long Beach, California.