"Most GAN-based image translation networks are trained to solve a single task. For example, translate horses to zebras," said NVIDIA's lead computer-vision researcher Ming-Yu Liu in a blog post. "In this case, we train a network to jointly solve many translation tasks, where each task is about translating a random source animal to a random target animal. Eventually, the network learns to generalize to translate known animals to previously unseen animals."
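The core idea in Liu's quote can be sketched in a few lines: instead of fixing one source/target pair (horses to zebras), each training iteration draws a random pair of animal classes, so the same network is pushed to handle many translation tasks at once. The sketch below is purely illustrative; the class names and `train_step` placeholder are assumptions, not NVIDIA's actual code or model.

```python
import random

# Hypothetical animal classes standing in for the real training set.
ANIMAL_CLASSES = ["husky", "tabby_cat", "red_fox", "hyena", "black_bear"]

def sample_translation_task(classes, rng=random):
    """Pick a random (source, target) pair of distinct animal classes."""
    source, target = rng.sample(classes, 2)
    return source, target

def train_step(source, target):
    # Placeholder: one generator/discriminator update translating images of
    # `source` toward `target` would happen here in a real GAN training loop.
    return {"source": source, "target": target}

# Each iteration is a different translation task; over many iterations the
# network sees a wide variety of pairs, which is what lets it generalize to
# animal classes it never saw during training.
for _ in range(1000):
    src, tgt = sample_translation_task(ANIMAL_CLASSES)
    train_step(src, tgt)
```

The only point the sketch makes is the sampling strategy: the diversity of (source, target) pairs, not any single pair, is what the network is optimized for.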

If you've seen AI image translation demos before, you might guess that the results won't be uniform. You'd be right: in some cases it maps your pet's adorable expression convincingly onto the other animals, and in others, not so much. In NVIDIA's own example (above), the other dogs and even the hyena look about right, but the sloth bear came out a bit, er, distorted and the black bear has a yellow tongue.

I tried a few of my own shots, with far less adorable results. Some of the animals ended up looking like Far Side drawings, some became abstract paintings, and others were nightmare fuel.

Still, it shows promise as a way to teach AI to deal with unknown subjects and improvise. The team next plans to apply the technique to other kinds of images, like flowers and food, at higher resolutions. "This is how we make progress in technology and society by solving new kinds of problems," said Liu. Hopefully, that won't include the creation of freaky hybrid animals -- check here to try it for yourself.