
Or maybe you’d prefer to see owls:

[Image: model-generated owl sketches]

And the best example of all, yoga poses:

[Image: model-generated yoga poses]

Now, these are like human drawings, but they are not themselves drawn by any human. They are reconstructions of how a human might sketch such a thing. Some of them are quite good and others are less so, but they would all pretty much make sense if you were playing Pictionary with an AI.

SketchRNN is also built to accept input in the form of human drawings. You send something in and it tries to make sense of it. Take a model trained on cat data: what would happen if you lobbed in a three-eyed cat drawing?

[Image: a three-eyed cat sketch alongside the model's reconstructions at different temperatures]

You see that? In the various outputs from the model to the right (again showing different “temperatures”), it strips out the third eye! Why? Because the model has learned that cats have triangular ears, two whiskers, a roundish face, and only two eyes.
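The "temperature" knob the model exposes is a standard sampling trick: before the network's raw output scores are turned into probabilities, they are divided by a temperature value. A low temperature sharpens the distribution toward the most likely next stroke (a safe, canonical cat); a high temperature flattens it, producing looser, stranger sketches. Here is a minimal sketch of the idea, using made-up logit values rather than anything from the actual model:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then apply softmax.

    Low temperature -> sharper distribution (more predictable output).
    High temperature -> flatter distribution (more varied output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next-stroke choices.
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, temperature=0.5)
warm = softmax_with_temperature(logits, temperature=2.0)

# The cold run concentrates probability on the top choice;
# the warm run spreads it out across all three.
print([round(p, 3) for p in cold])
print([round(p, 3) for p in warm])
```

At temperature 1.0 this is ordinary softmax; the knob only rescales how confident the sampler is, which is why the different temperature columns in the cat figure range from tidy to wobbly versions of the same idea.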

Of course, the model does not have any idea what an ear actually is or if cats’ whiskers move or even what a face is or that our eyes can transmit images to our brains because photons change the shape of the protein rhodopsin in specialized cells in the retina. It knows nothing of the world to which these sketches refer.

But it does know something about how humans represent cats or pigs or yoga or sailboats.

“When we start generating a drawing of a sailboat, the model will fill in with hundreds of other models of sailboats that could come from that drawing,” Google’s Eck told me. “And they all kind of make sense to us because the model has pulled out from all this training data the platonic sailboat—you’ll kill me for saying this—but the ur sailboat. It’s not a question of specific sailboats, but sailboatness.”

As soon as he said it, he seemed to regret his momentary loftiness. “I’m gonna have the philosophers come crush me for that,” he said. “But as a handwavey thing, it makes sense.” (The Atlantic’s resident philosopher Ian Bogost told me, “Philosophically speaking, this is pure immanent materialism.”)

The excitement of being a part of the artificial intelligence movement, the most exciting technological project ever conceived, at least to those within it, and to a lot of other people, too—well, it can get the better of even a Doug Eck.

I mean, train a network on drawings of rain. Then input a sketch of a fluffy cloud, and, well, it does this:

[Image: a fluffy cloud sketch, from which the model makes rain fall]

Rain falls out of the cloud you’ve sent into the model. That’s because many people draw rain by first drawing a cloud and then drops coming out of it. So if the neural network sees a cloud, it makes rain fall out of the bottom of that shape. (Interestingly, the data is a succession of strokes, so if you start with the rain, the model will not produce a cloud.)
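That order-dependence falls straight out of how the drawings are encoded. SketchRNN doesn't see a picture; it sees an ordered sequence of pen moves, each step roughly of the form (Δx, Δy, pen-lifted), which the network consumes one at a time. A toy illustration (the coordinates below are invented, not real QuickDraw data) shows why cloud-then-rain and rain-then-cloud are different inputs even though they contain the same strokes:

```python
# Each step is (dx, dy, pen_lifted_after_this_move): a relative pen
# move, plus a flag saying whether the pen comes off the paper next.
# These particular strokes are made up for illustration.
cloud = [(10, 0, 0), (5, -8, 0), (10, 0, 0), (5, 8, 0), (-30, 0, 1)]
rain = [(0, 12, 1), (8, -12, 0), (0, 12, 1)]

cloud_first = cloud + rain   # how most people draw rain: cloud, then drops
rain_first = rain + cloud    # same pen moves, presented in reverse order

# Same multiset of strokes...
assert sorted(cloud_first) == sorted(rain_first)
# ...but a different sequence, so a sequence model treats them
# as different drawings and continues them differently.
assert cloud_first != rain_first

print(len(cloud_first), "pen moves")
```

A model trained on such sequences predicts each move conditioned on the prefix it has seen so far, so a cloud-shaped prefix strongly suggests raindrops come next, while a raindrop prefix almost never precedes a cloud in the training data.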

It’s delightful work, but in the long project to reverse engineer how humans think, is this a clever side project or a major piece of the puzzle?