I’ve just read a fantastic New York Times article from last year on the ongoing $1,000,000 Netflix challenge to create an algorithm that will predict which unseen films customers will like based on their past preferences.

As well as being an interesting insight into how companies are trying to guess our shopping preferences, it is also a great guide to one of the central problems in scientific psychology: how we can reconcile numerical data with human thought and behaviour.

The Netflix prize teams have a bunch of data from customers who have rated films they’ve already seen and they have been challenged to write software that predicts future ratings.

Part of this process is hypothesis testing, essentially an experimental approach to find out what might be important in the decision process. For example, a team might guess that women will rate musicals higher than men. They can then test this prediction out on the data, making further predictions based on past conclusions, theories or even just hunches.
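As a concrete sketch of what such a hypothesis test might look like, here is a toy permutation test on invented ratings (the numbers and the approach are my own illustration, not anything the teams actually ran):

```python
import numpy as np

# Invented ratings for one musical, split by gender: toy data only.
women = np.array([5, 4, 4, 5, 3])
men = np.array([2, 3, 4, 2, 3])

observed = women.mean() - men.mean()   # 1.4 stars higher on average

# Permutation test: if gender were irrelevant to these ratings,
# shuffling the labels should produce gaps this large fairly often.
rng = np.random.default_rng(0)
pooled = np.concatenate([women, men])
n_perm = 10_000
hits = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    if pooled[:5].mean() - pooled[5:].mean() >= observed:
        hits += 1

p_value = hits / n_perm   # small: the observed gap is rare by chance
print(round(observed, 1), p_value)
```

A permutation test like this makes no assumptions about how ratings are distributed, which suits messy preference data; a small p-value is the kind of evidence a team would use to keep a predictor in the model.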

The other approach is to use mathematical techniques that look for patterns in the data. To use the jargon, these procedures look for ‘higher order properties’ – in other words, patterns in the patterns of data.

Think of it like looking at the relationship between different forests rather than thinking of everything as individual trees.

The trouble is that these mathematical procedures can sometimes find reliable high-level patterns when it isn’t obvious to us what they represent. For example, the article discusses the use of a technique called singular value decomposition (SVD) to categorise movies based on their ratings:

There’s a sort of unsettling, alien quality to their computers’ results. When the teams examine the ways that singular value decomposition is slotting movies into categories, sometimes it makes sense to them – as when the computer highlights what appears to be some essence of nerdiness in a bunch of sci-fi movies. But many categorizations are now so obscure that they cannot see the reasoning behind them. Possibly the algorithms are finding connections so deep and subconscious that customers themselves wouldn’t even recognize them. At one point, Chabbert showed me a list of movies that his algorithm had discovered share some ineffable similarity; it includes a historical movie, “Joan of Arc,” a wrestling video, “W.W.E.: SummerSlam 2004,” the comedy “It Had to Be You” and a version of Charles Dickens’s “Bleak House.” For the life of me, I can’t figure out what possible connection they have, but Chabbert assures me that this singular value decomposition scored 4 percent higher than Cinematch – so it must be doing something right. As Volinsky surmised, “They’re able to tease out all of these things that we would never, ever think of ourselves.” The machine may be understanding something about us that we do not understand ourselves.
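For readers curious what SVD actually does to a ratings table, here is a minimal sketch using a tiny invented four-viewer, four-film matrix (my own illustration, not Netflix data):

```python
import numpy as np

# A tiny invented ratings matrix: rows are viewers, columns are films.
ratings = np.array([
    [5, 4, 1, 1],   # viewer A: high on films 1-2, low on 3-4
    [4, 5, 2, 1],   # viewer B: similar tastes to A
    [1, 1, 5, 4],   # viewer C: the reverse pattern
    [2, 1, 4, 5],   # viewer D: similar tastes to C
])

# Singular value decomposition: ratings = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)

# Each row of Vt is a latent 'taste dimension' across the four films.
# The first usually reflects overall popularity; the second captures
# the biggest contrast, here the films-1-2 versus films-3-4 split we
# baked into the data.
print(np.round(Vt[1], 2))

# Films whose loadings share a sign on a factor cluster together,
# whether or not we can name what that factor 'means'.
films_1_and_2_cluster = np.sign(Vt[1][0]) == np.sign(Vt[1][1])
print(films_1_and_2_cluster)   # True
```

The algorithm happily returns these factors for any ratings matrix; nothing in the mathematics says what a factor corresponds to in viewers’ heads, which is exactly the interpretive gap the article describes.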

In these cases, it’s tempting to think there’s some deeply psychological property of the film that’s been captured by the analysis. Maybe they all trigger a wistful nostalgia, or perhaps each represents the same unconscious fantasy.

It could also be that each is under 90 minutes, or comes with free popcorn. It could even be that the grouping is entirely spurious and represents nothing significant. Importantly, the answer to these questions is not in the data waiting to be discovered; we have to make the interpretation ourselves.

Experimental methods go from meaning to data, while exploratory methods go from data to meaning. Somewhere in the middle is our mind.

The Netflix challenge is this problem on steroids and the NYT piece brilliantly explores the practical problems in making sense of it all.

Link to NYT piece ‘If You Liked This, You’re Sure to Love That’