The nominees are…

It’s awards season in Hollywood, and, as often happens, a man puts a gun to your head just before your Oscar party and presents this scenario:

There are two groups who will predict the six major categories of the Academy Awards. If you want to live, tell me which one will fare better.

The first group is FiveThirtyEight, a nationally acclaimed polling aggregation website. True to their ethos, 538 will examine 25 years’ worth of historical data from other awards shows to determine their skill as Oscar predictors, tally their 2015 awardees, and then make their Oscar picks accordingly. This is the same exhaustive method that 538 famously used to correctly predict the winner of the 2012 Presidential election in all 50 states and the District of Columbia.

The other group will consist of seven average people using UNU to agree on a set of predictions. These people are neither mathematicians nor movie-goers. Six of the seven have seen fewer than two of the nominees for Best Picture. They know nothing about the other nominees except what they have heard second-hand from friends or in the media. They will base their Oscar predictions on information like “I heard Birdman was pretty crazy…” and “Wait, what’s Whiplash about again?”

So, with a gun to your head, who is it gonna be?

Any rational person is going to choose FiveThirtyEight for all the obvious reasons. Not only do they have an incredible track record of accurate forecasting, they have a meticulous strategy that breaks the art of predicting the Oscars down into a science.

In contrast, the seven average people have little information on which to make a prediction. Not only have they not seen the movies in question, they are blissfully unaware of the existence of the other awards shows, to say nothing of those shows’ 25-year track record of Oscar predicting success. The Group of Seven seems to be throwing darts at the wall.

The only twist is that the Group of Seven will use UNU™, a new social platform that pools their input in real-time to produce a Collaborative A.I. that answers for the group as a whole. The basic premise of the UNU platform is that many minds are better than one, even if those many minds individually lack all the information.

And the Oscar goes to… UNU?

At this point, you can humbly request that your tormentor put the gun down.

Not only did the Group of Seven using UNU match the number-crunchers at 538 with five out of six correct picks, they predicted the exact same winners.

The takeaway is not that 538 suddenly forgot how to do their jobs. Nor did UNU’s Group of Seven somehow throw bullseyes out of nowhere.

What happened last night is an example of Collaborative Intelligence – the sum total of all of the Group of Seven’s secondhand and subjective information produced just as much wisdom as 538’s painstaking research.

If you’ve seen Best Picture Nominee THE IMITATION GAME, you’re probably reaching the same conclusion that Alan Turing used to (spoiler alert!) win World War II for the Allies: You don’t need to speak German to break the Enigma Machine. You only need to know how to say two words – HEIL HITLER.

And when you’re trying to predict something like the Oscars, you don’t need 25 years’ worth of data. You only need to know that Birdman was pretty crazy.

It’s worth pointing out that Nate Silver, the genius forecaster behind 538, did not predict the Oscars this year. But, in years past, his attempts to predict the results of the six major categories have fared no better than 75%.

Keep that 75% number in mind…

THE SUPPORTING CAST

Unfortunately for the guy with a gun to your head, there’s no way to break the tie because 538 only picked the six major categories.

But the Group of Seven used UNU to pick winners in 15 categories. When they picked their winners for categories like Best Visual Effects and Best Costume Design, the group had even less information than they did for Best Picture and Best Actress.

And yet…UNU got 11 out of 15 questions right.

Where 538 declined to even attempt to predict the majority of categories, UNU predicted 73% of the categories correctly.

Somehow it didn’t matter that none of the seven members had seen any of the nominees for Best Documentary or Best Foreign Language Film. Whatever the group had gleaned from their collective experiences allowed them to agree on CITIZENFOUR and IDA using the UNU platform.

FADE OUT

If Nate Silver, the face of big-data forecasting, can do no better than 75% in picking the major categories, what does it say when a new A.I. tool allows seven uninformed people to pool their intelligence and correctly predict 73% of all categories?

At Unanimous A.I., we believe that it shows that our UNU platform (which is still in alpha testing among small groups of users) is already tapping into the wisdom floating all around us.

What will happen when that Group of Seven becomes the Group of Seventy? Or seven thousand?

We can’t wait to find out.