The best predictions are always made in hindsight. Nostradamus achieved fame by writing down a lot of vague bollocks about the future, and relying on the human brain's incredible ability to spot links and patterns – even where none exist – to do the rest. His legacy is a pile of prophecies that are absolutely brilliant at predicting events, as long as they have already happened.

Now Nostradamus has a silicon rival, but while the French seer generated only 900 or so of his quatrains, the University of Tennessee's "Nautilus" supercomputer is capable of spewing out countless millions of predictions – enough to keep an army of cherry-pickers beavering away from now until eternity.

Nautilus trundles through hundreds of millions of news articles, applying sentiment analysis algorithms and place name detection to take a subject and correlate it to a location and a general mood. The theory goes that if "Brad Pitt" is mentioned in lots of French articles alongside positive words like "great" or "brilliant" then the French people must love him, but if "Hosni Mubarak" is associated with "evil" or "horrible" then it's boom-time for Egyptian pitchfork manufacturers.
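Just to show how blunt that kind of correlation is, here's a toy sketch of the idea. The word lists and articles are invented for illustration, and Nautilus's actual pipeline is far larger and more sophisticated, but the basic shape is the same: count happy and unhappy words near a name, and bucket the result by place.

```python
from collections import defaultdict

# Invented word lists; real systems use dictionaries with thousands of entries.
POSITIVE = {"great", "brilliant", "wonderful"}
NEGATIVE = {"evil", "horrible", "terrible"}

def tone(text):
    """Net sentiment: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mood_by_location(articles):
    """Aggregate tone for each (subject, location) pair."""
    scores = defaultdict(int)
    for subject, location, text in articles:
        scores[(subject, location)] += tone(text)
    return dict(scores)

articles = [
    ("Brad Pitt", "France", "Brad Pitt was brilliant in his latest film"),
    ("Hosni Mubarak", "Egypt", "critics called the regime evil and horrible"),
]
print(mood_by_location(articles))
# {('Brad Pitt', 'France'): 1, ('Hosni Mubarak', 'Egypt'): -2}
```

Everything interesting – who is being talked about, whether the words are even aimed at them – is thrown away before the counting starts.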

The researcher behind this, Kalev Leetaru of the University of Illinois's Institute for Computing in the Humanities, Arts and Social Science, speculated in his recent paper that mining this information could do anything from "forecasting impending conflict to offering insights on the locations of wanted fugitives."

These claims were amplified by the BBC, who score many bonus points for linking to the actual paper(!), but immediately lose them for using the phrase "scientists say" to refer to a single researcher (presumably they forgot to update their template) and for failing to explain that you probably have more chance of predicting Hosni Mubarak's next bowel movement than this supercomputer does of anticipating future revolutions.

I'm not a big fan of sentiment analysis, because every time I try to use it in a practical situation the results that come back are obviously, spectacularly wrong. To illustrate the problem, here's one I just did for mentions of 'Dorries' – a term some of you may be familiar with – on Twitter (you can see a longer version here).

Sentiment analysis for 'Dorries' on Twitter. Longer and bigger version at http://yfrog.com/keppnp

The sentiment analysis algorithm reckons that about half the mentions are positive. The sentiment analysis algorithm is talking bollocks. Tweets that the algorithm rates as positive about 'Dorries' include the following sentimental snippets:

"I think we should abolish Dorries"

"Dorries will lose her seat in the boundary reshuffle"

"She is clearly deluding herself" <---- if this is true I'll be very happy!

"#dingdongthewitchisgone"

"KEEP YOUR FILTHY HANDS OF MY [****] YOU HIDEOUS BITCH"

You can see the problem if you skim through the whole tweets. The algorithm has no concept of context: it just blindly assumes that if 'good' and 'Pepsi' appear in the same sentence it must be positive, even if that sentence is actually "good riddance, Pepsi."
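In case that sounds like an exaggeration, here's a minimal bag-of-words scorer of the kind described above (the word lists are my own invention, not the ones any real tool uses). It cheerfully rates "good riddance, Pepsi" as positive, for exactly the reason the tweets above get misread:

```python
POSITIVE = {"good", "great", "love"}
NEGATIVE = {"bad", "awful", "hate"}

def naive_sentiment(text):
    # Strip punctuation, lower-case, and count matching words.
    # No notion of negation, sarcasm, or who the words are aimed at.
    words = [w.strip(",.!?\"").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(naive_sentiment("Pepsi is a good drink"))  # positive (plausible)
print(naive_sentiment("good riddance, Pepsi"))   # positive (wrong)
```

Both sentences contain "good", so both come back positive. That's the whole trick, and the whole problem.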

Tone of coverage mentioning Egyptian President Hosni Mubarak, Summary of World Broadcasts 1979–2011. The Y-axis shows standard deviations from the mean. (Source)

You can see this problem in the paper. The chart above shows a big negative shift in reports about President Mubarak ahead of Egypt's revolution, as you'd expect. This is cited as evidence that negative shifts like this could predict revolution, but sentiment was even worse in 1981, when he began his 30-year reign. Were articles about Mubarak then really negative about him, or were they just negative in tone because they set his rise to power in the context of the assassination of his predecessor, President Sadat?

No attempt is made to investigate the correlation between these shifts in sentiment and subsequent revolutions. Sure, with hindsight we can see that Egypt's revolution followed a downward trend in media sentiment, but sentiment is very noisy, the scale of the shift is no worse than that seen in the USA between the late 60s and early 70s, and we didn't see a revolution there. Until someone can determine the probability of down trends preceding revolutions across a decent-sized sample, this is just subjectively interpreted anecdata.
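For anyone wondering what "standard deviations from the mean" on the chart's Y-axis actually means, it's a z-score: each month's tone expressed relative to the long-run average. Here's a sketch with invented tone values. The point is that a perfectly ordinary noisy series can throw up a dip of two or three standard deviations with no revolution anywhere in sight:

```python
import statistics

def z_scores(series):
    """Express each value as standard deviations from the series mean."""
    mu = statistics.mean(series)
    sigma = statistics.stdev(series)
    return [(x - mu) / sigma for x in series]

# Invented monthly tone values: mostly flat, with one sharp dip.
tone = [0.1, 0.2, 0.0, 0.1, -0.1, 0.2, 0.1, 0.0, -0.9, 0.1]
print([round(z, 2) for z in z_scores(tone)])
```

The dip comes out at nearly three standard deviations below the mean, which sounds dramatic until you remember nothing happened; without a base rate for how often such dips precede revolutions, the chart tells you nothing predictive.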

What about the claim that this tool could predict the whereabouts of Osama bin Laden? The author tells us that:

"While far from a definitive lock on Bin Laden's location, global news content would have suggested Northern Pakistan in a 200 km. radius around Islamabad and Peshawar as his most likely location, and that he was nearly twice as likely to be making his residence in Pakistan as Afghanistan."

Visualization of geocoded Summary of World Broadcasts content, 1979–2011, mentioning “bin Laden”. (Source)

I don't doubt this, but I don't see that it's a particularly useful or significant result either. A circle of radius 200km covers an area of more than 125,000 square kilometres. The CIA located bin Laden to within a single compound, so I'm not sure they'll be rushing to pick up the phone. In any case, the paper fails to explain why an arbitrary range of 200km is a significant result, or to show why deploying a supercomputer is more effective than just asking a journalist who covers the beat.
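The 125,000 figure is nothing more exotic than the area of a circle:

```python
import math

# Area of the paper's 200 km "most likely location" circle.
radius_km = 200
area = math.pi * radius_km ** 2
print(f"{area:,.0f} square kilometres")  # 125,664 square kilometres
```

That's roughly the size of England, which gives you a sense of how precise a "prediction" it is.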

The results presented in this paper don't really make much sense until a human comes along, looks at events, looks at the data, and subjectively interprets the crap out of them until they fit. It's all very interesting, and I don't want to disparage the author, who has generated a lot of intriguing data, but there isn't a single result in this work that demonstrates any ability to predict events – largely because no attempt at a serious statistical analysis has been made.

That's fine; it's a speculative result. But it's a shame the Beeb didn't pick up on this when they decided to run the story under the completely misleading headline 'Supercomputer predicts revolution'. Like Nostradamus, the predictions of Nautilus are only really successful in hindsight.

@mjrobbins | layscience@googlemail.com