Stop fixating on likelihood

The main bottleneck for a scientific study of creativity is that we have no preconception about the value of the objects we wish to generate. Artists play with knowledge, ideas, and concepts, but most importantly, they invent their own value function. This fundamental issue of problem definition is very different from what we are used to dealing with in ML, where we usually look for unknown solutions to well-defined problems.

The goal of generative modeling in ML is to generate objects that are different, but not very different, from known (training) objects. The problem is commonly framed as probabilistic sampling from a fixed but unknown distribution. Objects are then evaluated by computing their likelihood under this distribution. The trouble is that the likelihood is unknown, and the development of surrogates is guided by heuristics about what it means to be “different but not very different”. Because of this, when all we have is a sample, the concept of likelihood is more of a self-imposed straitjacket than a constructive idea. Once we get rid of our narrow notion of likelihood, it turns out that our existing algorithms can also generate objects that are “quite different yet plausible”, exhibiting a dim sign of creativity.
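To make the likelihood framing concrete, here is a minimal sketch (not from the paper) using a kernel density estimate as the surrogate likelihood: an object close to the training sample scores high, while a genuinely novel object scores low, even though the novel one might be the more interesting creation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Toy "training set": a sample from a fixed but unknown 2D distribution.
data = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Surrogate likelihood: a kernel density estimate fitted to the sample.
kde = gaussian_kde(data.T)

near = np.array([[0.5, -0.3]])  # "different but not very different"
far = np.array([[6.0, 6.0]])    # novel, far outside the training support

log_near = kde.logpdf(near.T)[0]
log_far = kde.logpdf(far.T)[0]
# Under the likelihood criterion, the novel object is simply "bad":
print(log_near > log_far)
```

The point is that the surrogate, by construction, penalizes exactly the objects that fall outside the training support, which is where new types would live.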

A distance-preserving projection (t-SNE) of digits to a two-dimensional space. Colored clusters are original digit types (from 0 to 9). The gray dots are newly generated objects.

The 2D projection on the left shows that an autoencoder can generate new digits, but also other objects which look like digits from another world and whose likelihood, under the narrow ML goal of generative models, is presumably low. In the paper, we deliberately tuned the autoencoder to become more creative-explorative and to come up with new types of objects. We then used a human in the loop to validate what counts as a new type. Finding a way to automatically discover the value of these objects is the crux of computational creativity. For now we have no solution, only a playground in which this question, novel to the ML community, can be studied.
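The explorative sampling idea above can be sketched in a few lines. This is an illustration only, not the paper's model: it uses PCA as a linear stand-in for an autoencoder (`transform` as the encoder, `inverse_transform` as the decoder) and "explores" by drawing latent codes well beyond the spread of the training codes before decoding them back to images.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Linear autoencoder stand-in: encoder = transform, decoder = inverse_transform.
digits = load_digits()
pca = PCA(n_components=16).fit(digits.data)
codes = pca.transform(digits.data)

rng = np.random.default_rng(0)
# "In-distribution" sample: a latent code drawn near the training codes.
safe_code = rng.normal(codes.mean(axis=0), codes.std(axis=0))
# "Explorative" sample: a code stretched far beyond the training spread.
bold_code = rng.normal(codes.mean(axis=0), 4.0 * codes.std(axis=0))

safe_img = pca.inverse_transform(safe_code)  # a plausible digit-like object
bold_img = pca.inverse_transform(bold_code)  # a candidate "new type"
print(safe_img.shape, bold_img.shape)  # both decode to 8x8 = 64 pixels
```

Both decoded objects are valid images in pixel space; deciding whether the bold one is a worthless smudge or a new kind of digit is precisely the value judgment that, for now, needs a human in the loop.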

Focus on the “what”, not the “how”

In a more general sense, we would like to see more papers on the question of what we should do with deep nets instead of how to do it. Our problem-solving culture is inherently biased towards incremental improvements on techniques that solve known problems. While pushing the state of the art is of course important, major breakthroughs usually come from inventing new uses. The obvious example is AlphaGo, usually seen as a great achievement in solving a hard problem, whereas one could argue that their main contribution was to invent the problem in the first place: noticing that games, with their infinite data and non-trivial tasks, are a sweet spot for making deep learning shine. Creativity itself is as much about inventing our goals as solving them. Our argument has two sides: we push for more creativity in ML in inventing new problems, while also going for algorithms that invent their own goals.

If you like what you read, follow me on Medium, LinkedIn, & Twitter.