I’m coming to the conclusion that of all the myths businesses and their leaders tell themselves, one of the most harmful is that they know where the expertise is. The more I learn about the results of crowdsourcing and open innovation efforts, the more I believe the smart strategy is to expose your problems and challenges to as many people as possible and let them show you what they can do. Here’s my most recent example of the power of this approach.

The online startup Kaggle assembles a diverse group of people from around the world to work on tough problems submitted by organizations. The company runs data science competitions in which the goal is to arrive at a better prediction than the submitting organization’s starting ‘baseline’ prediction. The results of these contests are striking in a couple of ways. For one thing, improvements over the baseline are usually substantial. In one case, Allstate submitted a dataset of vehicle characteristics and asked the Kaggle community to predict which vehicles would later have personal liability claims filed against them. The contest lasted about three months and drew more than 100 contestants; the winning prediction was more than 270% better than the insurance company’s baseline.

Another interesting fact is that the majority of Kaggle contests are won by people who are marginal to the domain of the challenge — who, for example, made the best prediction about hospital readmission rates despite having no experience in health care — and so would not have been consulted as part of any traditional search for solutions. In many cases, these demonstrably capable and successful data scientists acquired their expertise in new and decidedly digital ways.

Between February and September of 2012, Kaggle hosted two competitions, sponsored by the Hewlett Foundation, on computer grading of student essays. Improvements in this area matter because essays capture student learning better than multiple-choice questions do, but are much more expensive to grade when human raters are used. Automatic grading of written answers would therefore both improve the quality of testing and lower its cost. Kaggle and Hewlett worked with many education experts to set up the competitions, and as they prepared to launch, some of these experts were worried.

The first contest was to consist of two rounds. Eleven established educational testing companies would compete against each other in the first; members of Kaggle’s community of data scientists would be invited to join in, individually or in teams, in the second. The experts were worried that the Kaggle crowd would simply not be competitive. After all, each of the testing companies had been working on automatic grading for some time and had devoted substantial resources to the problem. Their hundreds of man-years of accumulated experience and expertise seemed like an insurmountable advantage over a bunch of novices.

They needn’t have worried. Many of the ‘novices’ drawn to the challenge outperformed all of the testing companies in the essay competition, and came closer to the graders’ consensus scores than any of the individual human graders did. The surprises continued when Kaggle investigated who the top performers were. In both competitions, none of the top three finishers had any significant previous experience with either essay grading or natural language processing. And in the second competition, none of the top three finishers had any formal training in artificial intelligence beyond a free online course offered by Stanford AI faculty and open to anyone in the world who wanted to take it. The top three individual finishers were, respectively, from America, Slovenia, and Singapore.

Businesses certainly know where a lot of the relevant expertise lies in any given situation, but results like those from Kaggle show me that they certainly don’t know where all of it is. As the open source software advocate Eric Raymond famously observed, given enough eyeballs, all bugs are shallow. So why not expose your tough problems to as many eyeballs as possible?