Many times over the years, at parties and in other conversations, I have vociferously defended fellow journalists against charges of bias in their work, particularly journalists working in the lowly field of print journalism, as opposed to TV.
But within those caveats, I've always maintained that the majority of professional print journalists, anyway, try very, very hard to get the story right. But recently, I had an experience that gave me a new perspective on the issue.
A few weeks ago, I attended the public launch of a company's product that had, until that point, been kept tightly under wraps. The product involved a breakthrough approach and new technology with the potential to have a revolutionary impact on its industry, as well as on consumers around the world. Unlike most of the journalists covering the event, I was not an expert on that particular industry; it wasn't my normal "beat." I was there because I'd been interviewing the company's CEO over the previous several months for a book project. But that also meant that while I wasn't an expert on the industry in general, I was in the odd position of knowing more about the company's "secret" product than any other journalist in the room.
It was an eye-opening experience. A lot of major news outlets and publications were represented at the press conference following the announcement. A few very general facts about the product had been released, but the reporters had only been introduced to details about it a half hour earlier. There was still a lot that remained to be clarified about how the product worked, how it differed from other emerging products, and why the company felt so confident about its evolution and economic viability.
But the reporters' questions weren't geared toward getting a better understanding of those points. They were narrowly focused on one or two aspects of the story. And from the questions being asked, I realized--because I had so much more information on the subject--that the reporters were missing a couple of really important points about the product and its use. As the event progressed, I also realized why the questions that might have uncovered those points weren't being asked: the reporters already had a story angle in their heads, and they were focused only on getting the data points they needed to flesh out and back up what they already thought was the story.
There is always a tension, as a journalist, between asking open-ended questions that allow an interview subject to explain something and pressing or challenging them on accuracy or details. But if you think you already know the subject, or already have a story angle half-formed in your head, it's easy to skip the open-ended questions altogether.
The journalists at the press conference didn't have a bias as the term is normally used; that is, I didn't get the sense that they were inherently for or against the company or its product. They just appeared to think they knew the subject well enough, or had a set enough idea in their heads as to what this kind of story was about, that they pursued only the lines of questioning necessary to fill in the blanks of that presumed story line. As a result, they left the press conference with less knowledge and understanding than they otherwise might have had. And while nobody could have said the resulting stories were entirely wrong, they definitely suffered from that lapse--especially, as might be expected, when it came to the predictions they made about the product's evolution and future.
In his new book, How We Decide, Jonah Lehrer cites a research study done by U.C. Berkeley professor Philip Tetlock. Tetlock questioned 284 people who made their living "commenting or offering advice on political and economic trends," asking them to make predictions about future events. Over the course of the study, Tetlock collected quantitative data on over 82,000 predictions, as well as information from follow-up interviews with the subjects about the thought processes they'd used to come to those predictions.
His findings were surprising. Most of Tetlock's questions about future events were put in the form of specific multiple-choice questions with three possible answers, so even random guessing should have been right about a third of the time. But for all their expertise, the pundits' predictions turned out to be correct less than 33% of the time, which meant, as Lehrer puts it, that a "dart-throwing chimp" would have had a higher rate of success. Tetlock also found that the least accurate predictions were made by the most famous experts in the group.
Why was that? According to Lehrer,
Not that everyone in the field is perfect, unbiased, or even a good reporter. And not that I haven't encountered an editor who really, really wanted a story to say "X" as opposed to "Y." I remember one editor who complained that a story I'd done about NASA test pilots didn't make them sound like the wild cowboys he imagined they were. (Unfortunately--or fortunately--the truth about test pilots is that they're not cowboys. They're precision engineers and very calculated risk-mitigators, hitting test cards with calm, methodical accuracy. The risk isn't in their attitude; it's in the inherent hazards of testing new technology under real conditions for the first time.)