Like most deepdreamers, I’m getting pretty tired of the puppyslug nightmares. I was therefore thrilled when Google released code to use a “guide image” that I hoped would allow us to leave dogslug land.

As I mentioned in a previous post, I finally tweaked a dreamify.py script so that I could quickly cycle through guide images. This allowed me to test what different images do to different layers of the neural net.
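For anyone curious how a guide image steers the dream: in Google's released code, the usual "amplify whatever the layer sees" gradient is replaced by one that pushes each spatial position of the layer's activations toward its best-matching feature vector from the guide image (matched by dot product). Here's a minimal NumPy sketch of that objective; the function name and toy shapes are my own, not from `dreamify.py`:

```python
import numpy as np

def guided_objective(acts, guide_acts):
    """Guided-deepdream gradient: each spatial position in the current
    layer's activations is nudged toward the guide feature vector it
    matches best, measured by dot product (as in Google's notebook)."""
    ch = acts.shape[0]
    x = acts.reshape(ch, -1)        # (channels, image positions)
    y = guide_acts.reshape(ch, -1)  # (channels, guide positions)
    A = x.T.dot(y)                  # dot product of every position pair
    # each image position copies its best-matching guide vector
    return y[:, A.argmax(1)].reshape(acts.shape)

# toy example: 4 channels, 3x3 image activations, 2x2 guide activations
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 3, 3))
guide = rng.standard_normal((4, 2, 2))
grad = guided_objective(acts, guide)
print(grad.shape)  # (4, 3, 3)
```

Because the gradient is built entirely out of guide feature vectors, the layer you pick determines how much room the guide has to steer things, which is exactly what the experiments below probe.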

I started with this original image of my dad at the Honolulu Zoo because it contained lots of different textures and contrasts:

I then ran it through four different layers with 11 guide images chosen for variety. Here are the results, in the hope that they're useful to other dreamers. The guide image used is shown in the upper left of each result.
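A sweep like this is easy to script. The sketch below just enumerates the layer/guide combinations and builds output filenames; the guide paths and the commented-out `deepdream(...)` call are hypothetical stand-ins for whatever your own `dreamify.py` exposes (I also bumped `iter_n` to 40 for the `pool4` layer, which you'd handle per-layer):

```python
from itertools import product
from pathlib import Path

# layer names from the GoogLeNet model used by deepdream
layers = ["inception_3a/1x1", "inception_4a/1x1",
          "inception_4d/pool_proj", "pool4/3x3_s2"]
# hypothetical guide-image paths; substitute your own
guides = [f"guides/guide{i:02d}.jpg" for i in range(1, 12)]

jobs = []
for layer, guide in product(layers, guides):
    out = f"out/{layer.replace('/', '_')}__{Path(guide).stem}.jpg"
    jobs.append((layer, guide, out))
    # here you'd call your dreamify/deepdream routine, e.g.:
    # deepdream(base_img, end=layer, guide=guide, iter_n=25)  # hypothetical

print(len(jobs))  # 4 layers x 11 guides = 44 runs
```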

Layer inception_3a/1x1, iter_n 25, all other settings default (view full-size):

For this layer, the guide image appears to strongly affect color and the tiny block-like shapes that are generated.

Layer inception_4a/1x1, iter_n 25, all other settings default (view full-size):

The color isn’t affected much, but the texture of the swirls changes greatly.

Layer inception_4d/pool_proj, iter_n 25, all other settings default (view full-size):

Very little impact at all; the result is still very dogsluggy. The shape, texture, and placement of the dogslugs change slightly. This seems to imply that any dogslug-heavy layer is going to stay that way, even with a guide image. Has anyone seen something different in their experiments?

Layer pool4/3x3_s2, iter_n 40, all other settings default (view full-size):

Of the four I tested, this is probably my favorite layer to use with a guide image. It's very clear that the guide is actively shaping the deepdream output: you can see remnants of squished kitty faces, mutilated hands, and steampunky bike parts.

Hope this helps!