In 1906, while at a country fair in England, the great statistician Francis Galton observed a curious phenomenon. Looking through the data from a “guess the weight” competition for a large ox, he realised that the average guess (1,197 lb) of the roughly 800 participants was extremely close to the animal’s actual weight (1,198 lb).

This is the first recorded account of the link between the averaged guesses of a large sample and their remarkably consistent accuracy.

In short, the accuracy of the group can be far greater than that of even an expert individual.

In the case of the ox-weighing competition, some of the individual guesses came from livestock professionals, and they were far less accurate than the crowd of largely amateur fairgoers.

Today this effect is well documented, and we generally call it “the wisdom of the crowd” or “collective intelligence.” However, despite in-depth analysis of the topic, applications of the technique have until recently been scarce and largely confined to the academic world.

This is about to change significantly.

Why Collective Intelligence is the future of data & predictive technologies

Given the recent scandals involving data retention and the applications of that data, most notably the Facebook and Cambridge Analytica scandal, people are rightfully becoming very concerned about the safety of their personal information.

And with good reason…

In 2010, a study led by physicist Albert-László Barabási and his team concluded that human mobility patterns are up to 93% predictable.

This concern has also been an instigating thought behind the rise of the distributed and decentralised technologies powering crypto and web3 development.

But this article is not about data breaches; it is about the merits and applications of collective intelligence.

The reason for the crossover is that collective intelligence might hold the key to solving our data crisis. Data itself is not the enemy; the theft, analysis and ultimately sale of personal information is the concern.

But what if we could get people to contribute data willingly? And, better yet, reward them for staking that data based on their accuracy?

Prediction Markets for Determining Future Events

So while Francis Galton is widely credited with the concept of ‘collective intelligence’, the application and economic theory behind prediction markets is credited to Ludwig von Mises in his essay “Economic Calculation in the Socialist Commonwealth” and Friedrich Hayek’s 1945 article “The Use of Knowledge in Society.”

Essentially, prediction markets are exchanges trading in the outcomes of future events. Collective intelligence has proven to be a highly effective means of aggregating and uncovering hidden information and insight on a consistent basis.

For instance, when the task at hand is forecasting, prediction markets have proven particularly efficient at “consolidating the informed guesswork of many into hard probability”, as The Economist has previously written.

Developed as a research project at the University of Iowa in the late 1980s, the Iowa Electronic Markets (IEM), a digital reincarnation of the old election prediction markets, was one of the first examples of an online prediction market. A 2008 study of the IEM’s success rate across five presidential elections found that its predictions were more accurate than polling 74% of the time.

Another example is an Intelligence Advanced Research Projects Activity (IARPA) project that gave a group of 3,000 ordinary citizens nothing more than an internet connection and aggregation software. The researchers found the group was able to make better forecasts of global events than CIA analysts with access to classified information.

The study also found hidden experts, known as superforecasters, who would not normally have been identified. These superforecasters outperformed the CIA analysts by 30%.

The research literature on collective intelligence is collected in the peer-reviewed Journal of Prediction Markets, edited by Leighton Vaughan Williams and published by the University of Buckingham Press.

MIT is so fascinated by the process that it has a research centre entirely focused on the topic, the MIT Center for Collective Intelligence.

So how does it work exactly?

Surely you can’t just throw a handful of people in a room, get them to speculate on a topic they know nothing about, average the answers, and come up with an accurate prediction?

No, you can’t, but the actual method is not as far from this as you might think.

As summarised in James Surowiecki’s best-selling book The Wisdom of Crowds, four conditions must hold in order to extract consistently accurate results:

Independence: The various guesses have to be independent of one another. That is, each person must guess without knowledge of what other people have guessed.

Diversity: It is important to have a diverse set of guesses. In the ox example, the people making the guesses ranged from farmers and butchers to livestock experts and housewives. We all have cognitive biases in our decision making, so the influence of shared prejudices can be quelled by having a more varied sample.

Decentralisation: The people making the guesses should be able to draw on their private, local knowledge.

Aggregation: There must be some way of aggregating the guesses into a single collective guess. In the ox example, this was done by taking the average guess. This is a common method, but others may also be used.
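The aggregation step can be sketched in a few lines. The guesses below are illustrative, not Galton’s actual 1906 data:

```python
import statistics

# Illustrative, independently collected weight guesses (in lb)
# from a diverse crowd -- not Galton's actual data.
guesses = [1150, 1250, 1100, 1300, 1205, 1180, 1230, 1160, 1220, 1190]

# Aggregation: collapse many independent guesses into one estimate.
mean_guess = statistics.mean(guesses)      # simple average
median_guess = statistics.median(guesses)  # more robust to wild outliers

print(mean_guess, median_guess)  # -> 1198.5 1197.5
```

Even with only ten made-up guesses, the individual errors largely cancel out; the median is often preferred in practice because a single absurd guess cannot drag it far.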

What happens if these conditions are violated?

The short answer is groupthink, and groupthink is the antithesis of collective intelligence.

Groupthink occurs when the desire for harmony or conformity in a group results in irrational or dysfunctional decision-making. Quite often, ideas prevail unchallenged because those who disagree stay quiet to avoid conflict.

Many of these pitfalls can be avoided if the answers are independent, diverse and then aggregated: everyone goes through their own decision-making process, rather than being coerced towards a single conclusion that represents the group.

These “one answer for the group” situations are so familiar in schools, workplaces and even governments that they show how flawed our societal decision making really is, at every level.

Most of the initial research on groupthink was conducted by Irving Janis, a research psychologist at Yale University. One of the most notable historical examples he touches on in his work is the Bay of Pigs invasion (the failed invasion of Castro’s Cuba in 1961).

President Kennedy wanted to overthrow Fidel Castro and his intelligence network knew it. This affected the thought-process of the group, who did not act and think as clearly or as intelligently as they could have. Instead, they made illogical conclusions, and then pressed forward without an openness to new information. They were steadfast in their decision making, despite how flimsy the foundations of the plan were.

By directly involving himself in the decision making, Kennedy caused his subordinates to come up with a plan that pleased him rather than one that made the most strategic sense. The result, as history shows us, was a disaster that inadvertently led to the Cuban Missile Crisis.


Why does this work?

Human nature dictates that we are more engaged in a process whose outcomes affect us directly. It follows that if someone has a financial ‘stake’, with a potential reward for being correct, they are more likely to answer questions with greater diligence and preparation.

Bayesian truth serum (BTS) is a promising method for improving honesty and information quality in multiple-choice surveys. Given the individual and decentralised nature of collective-intelligence studies, it stands to reason that the two fields will intersect substantially.

An MIT study showed that the Bayesian Truth Serum “reduced errors by 21.3% compared to simple majority votes.”
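The quoted error reduction comes from weighting answers by more than a simple show of hands. A closely related aggregation idea from Drazen Prelec and colleagues at MIT is the “surprisingly popular” rule: pick the answer whose actual vote share most exceeds the share respondents predicted it would get. The sketch below is illustrative; the data and the `surprisingly_popular` helper are my own assumptions, not the study’s code:

```python
def surprisingly_popular(votes, predicted_shares):
    """votes: each respondent's chosen option.
    predicted_shares: each respondent's dict mapping
    option -> predicted fraction of the population choosing it."""
    options = sorted(set(votes))
    n = len(votes)
    actual = {o: votes.count(o) / n for o in options}
    predicted = {
        o: sum(p.get(o, 0.0) for p in predicted_shares) / len(predicted_shares)
        for o in options
    }
    # The answer that is more popular than the crowd expected wins.
    return max(options, key=lambda o: actual[o] - predicted[o])

# Classic illustration: "Is Philadelphia the capital of Pennsylvania?"
# Most vote yes, but even the informed no-voters predict yes will be
# popular, so "no" ends up surprisingly popular -- and correct.
votes = ["yes"] * 6 + ["no"] * 4
predictions = [{"yes": 0.8, "no": 0.2}] * 6 + [{"yes": 0.7, "no": 0.3}] * 4
print(surprisingly_popular(votes, predictions))  # -> no
```

Here “yes” gets 60% of the votes but was predicted to get 76%, while “no” gets 40% against a predicted 24%, so the rule selects “no”.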

Another study from the University of Illinois, titled “A Model of Financial Incentive Effects in Decision Making”, offered these summarising comments:

Consistent with the predictions of the model, participants offered performance-contingent incentives took longer to choose, examined more information, had higher levels of negative affect, and used decision strategies that led to more accurate choices than participants offered randomly distributed incentives.

Decentralised Prediction Markets

Augur and Gnosis are two examples of collective intelligence on the blockchain. They act as prediction-market platforms that reward a user for correctly predicting future events.

If a prediction can help inform major business decisions, it’s worth considerable time and money to avoid potentially negative and foreseeable outcomes. If you can predict the future, you can plan for it, and even influence it.

Gnosis is working on a framework for hosting and interfacing with decentralised prediction markets. You would use the Gnosis framework to create a market and link it to an event. Through this market, users may purchase something akin to a share in the event.

These markets allow people to buy and sell shares in the outcome of an event, where the current market price of a share reflects the crowd’s current estimate of the probability that the event will occur.
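That price-to-probability relationship can be sketched for a simple binary market. The numbers and helper functions below are illustrative assumptions, not drawn from Augur or Gnosis:

```python
PAYOUT = 1.00  # assume a YES share pays $1 if the event occurs, else $0

def implied_probability(share_price, payout=PAYOUT):
    """In a binary market, price / payout is the crowd's probability estimate."""
    return share_price / payout

def expected_profit(share_price, your_probability, payout=PAYOUT):
    """Expected profit per share if your own estimate differs from the market's."""
    return your_probability * payout - share_price

# A YES share trading at $0.62 implies a 62% crowd estimate.
print(implied_probability(0.62))             # -> 0.62
# If you believe the true probability is 70%, buying is positive-expectation.
print(round(expected_profit(0.62, 0.70), 2))  # -> 0.08
```

Traders who disagree with the crowd buy or sell until the price moves towards their estimate, which is exactly how the market aggregates dispersed private knowledge into a single number.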

While researching the differences between the two platforms, I found a comment on an Augur forum that summarises them more succinctly than I could, so I will paste it here in its entirety:

“The main difference is that Gnosis uses a centralised oracle whereas Augur has developed a decentralised oracle (The reporting system). A centralised oracle has the benefits of being faster to resolution (likely within hours to a day) and much simpler. Centralised oracles however are not censorship resistant, and can be potentially shut down or mismanaged by the trusted party. A decentralized oracle has the benefit of being censorship resistant, meaning it is not possible for an entity to shut the system down or a malicious trusted party to mismanage the platform (because the system is trust-less). Resolution takes a minimum of 7 days in Augur’s particular implementation however, and could take even longer (A maximum of 20 weeks + a 2 month forking period if things escalate to that point).”

Use-cases and future implications of collective intelligence

Right now, the applications of collective intelligence are esoteric, with the majority of studies and papers conducted and written by academic institutions such as MIT. However, with more information and working practical models, collective intelligence could become a ubiquitous tool for governing decision making and planning.

As an example, Civil recently announced funding from ConsenSys to build a self-governing newsroom, using blockchain technology and cryptoeconomics to create an open marketplace for journalists and citizens.

CanYa, too, has spoken about the technology as a device to help with decision-making protocols in the CanYa DAO.

To reiterate an earlier point: we all have cognitive biases in our decision making, so if we can quell the influence of shared prejudices by having more varied decision-making groups, then I reason that we should promote and develop this technology as a matter of importance. True democracy relies on it.