At the University of Chicago in the 1970s and early '80s, the mercurial behavioral scientists Robin Hogarth and Hillel Einhorn performed groundbreaking research examining how ostensible experts often make suboptimal judgments. One study showed that expert pathologists had a very low ability to predict a patient’s survival time based on viewing a biopsy slide. Subsequent studies showed that aggregating the decisions of a group of people outperformed the predictions of individual experts. Hogarth and Einhorn’s research began to threaten the idea of expertise in general, prompting a colleague of theirs—now a colleague of mine—to warn them in jest that their work was dangerous because people need experts.

The actual accuracy of experts seems secondary to their purpose: to provide guidance. As a Midwestern teenager entering a phase of intensive cultural consumption in the 1990s, I sought out expert opinions in music, television, and film. This expertise came in the form of critics, who I believed had more access to media, earlier access to media, and more experience with the medium that they were covering. I turned to them for guidance, and I followed their critical advice. Plus, I couldn’t expect to find out about Mr. Show, Miller’s Crossing, and The Cenobites merely from my social network. Critics—with their extensive access, priority access, and comprehensive experience—provided this exclusive knowledge. Critics led, and I followed.


WITHOUT DIPPING INTO TOO much armchair sociology, let me state the obvious: the Internet has dramatically changed the role of the cultural critic. Albums and movies “leak” far in advance of their release dates, entire libraries of music or television shows can be torrented and hoarded in a matter of hours, and as quickly as terabytes of .mp3s and .avis are transmitted, so too are all of our opinions on the media we are consuming. All of this means critics no longer have exclusivity, priority, or even, necessarily, expertise.

Expertise requires that, compared to the average person, one has a deeper understanding of a topic, a more well-researched opinion on the topic, and privileged information on the topic. The ability for anyone with a fast wireless connection to obtain an entire Lou Reed discography or the entire compendium of Get a Life episodes means that anyone can dig deep into a particular body of work. Access to carefully written blog posts about the true meaning of Inland Empire and the hidden samples used in Paul’s Boutique (not to mention access to Wikipedia) means that research is easy. Music and film piracy means that priority access has become a thing of the past.

Much has been written about how the Internet has granted the opportunity for celebrity status to the masses, making good on Andy Warhol’s promise of everyone getting their 15 minutes of fame, and justifying Time magazine’s decision to give you (me) their 2006 Person of the Year award. But Thomas de Zengotita wrote the most intelligent treatise on this phenomenon in his prescient, pre-YouTube article, “Attack of the Superzeroes: Why Washington, Einstein, and Madonna can’t compete with you.” In it, de Zengotita noted, “Being famous isn’t what it used to be.” This point continues to be exemplified by every Antoine Dodson who gets a TV show and every @Dadboner who gets a book deal. Less has been said about the transformative power of the Internet to turn us all into critics. To paraphrase de Zengotita, “Being a critic isn’t what it used to be.” Expertise no longer belongs solely to the critics. (Or, as Jay-Z recently put it, “I think reviews have lost a lot of their importance now because of the Internet.”)

In many ways, though, the everyone’s-a-critic age has been fantastically useful. I have carefully curated my Internet experience to inform me of the most entertaining movies, TV shows, and songs—not to mention the most interesting scientific articles, the most energy-efficient vacuum cleaner, the most user-friendly Mac-compatible app for making a to-do list, and the best place in Chicago to eat mirchi ka salan. I now count on my social network to enlighten me on albums and films that, as a Midwestern teenager, I feared I could only find in the most selective, coastal-elite magazines. And I count on the hive-mind to give me consumer reports far superior to Consumer Reports.

Indeed, crowdsourcing is a wonderful tool, but it still fails in a very particular way, which is that any evaluation is swayed by the evaluations that have come before it. A barbershop whose first Yelp review is one star is more likely to accrue negative reviews afterward; had that same barbershop received four stars as its first review, it would be more likely to accrue positive ones. In a now-famous experiment, Matt Salganik, Peter Dodds, and Duncan Watts empirically demonstrated this effect in an artificial music market. Participants could download various songs, and were randomly assigned to see the opinions of others who had downloaded those songs. In some versions of the study, a particular song appeared to be well-liked by the masses; in other versions, that same song appeared to be disliked. Regardless of quality, people evaluated the songs they believed to be well-liked positively and the songs they believed to be disliked negatively.

In more recent work, Lev Muchnik, Sinan Aral, and Sean Taylor have documented this “social influence bias” on a news aggregation website where users can up-vote or down-vote comments posted on various articles. The experimenters initially up-voted some comments and down-voted others at random, and showed that an initial up-vote led to increased subsequent up-voting whereas an initial down-vote increased down-voting. Interestingly, people also “corrected” the down-voted comments by up-voting them more than baseline levels, but even this correction never spurred them to the level of positivity that artificially up-voted comments attained. This experiment again suggests that our evaluations of anything are inevitably influenced by others’ evaluations, and that, given the increasingly public nature of opinion, objectivity is increasingly hard to find.
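The dynamic these experiments describe can be captured in a toy model. The sketch below is not the Muchnik, Aral, and Taylor experiment itself; it is a minimal simulation of the mechanism they documented, with invented parameters: each successive voter's chance of up-voting a comment shifts slightly with the sign of the comment's current score, so a single seeded vote compounds over time.

```python
import random

def simulate_votes(initial_vote, n_voters=1000, base_up=0.5, bias=0.1, seed=0):
    """Toy model of social influence bias on a voted comment.

    initial_vote: the experimenter's seeded vote (+1, -1, or 0).
    base_up: a voter's baseline probability of up-voting.
    bias: how much a positive/negative running score sways each voter.
    All parameter values are invented for illustration.
    """
    rng = random.Random(seed)
    score = initial_vote
    for _ in range(n_voters):
        # Voters lean with the visible score: herding, not independent judgment.
        lean = 1 if score > 0 else -1 if score < 0 else 0
        p_up = base_up + bias * lean
        score += 1 if rng.random() < p_up else -1
    return score

# With identical voters and identical random draws, the comment seeded
# with one up-vote finishes ahead of the one seeded with one down-vote.
print(simulate_votes(+1), simulate_votes(-1), simulate_votes(0))
```

Because the bias is self-reinforcing, the gap between the up-voted and down-voted runs grows far beyond the two-point difference in their starting scores, which is the herding pattern the experiments observed.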

SO WHERE DOES THIS leave the cultural critic? The pervasiveness of the social influence bias suggests that even professional TV, music, and film reviewers are not immune to others’ evaluations influencing their reviews. Whereas in a pre-torrented world, reviewers had first access to film, TV, and music, now they must inevitably write their reviews after being exposed to the opinions of the masses who have already consumed, or at least previewed, the object of the review. When critics were first, their reviews initiated the social influence bias process, but now this process precedes them. Thus, whereas critics used to guide tastes, they often now function as mirrors of public opinion.

Cultural critics now have an opportunity to provide a real service by reviving objectivity, and giving people an informed opinion rooted in legitimate and honest contemplation. At the same time, it’s harder than ever for them to do that because of all the noise. (And harder for us to know what’s been biased by others.) It's a paradox, but it’s a paradox that valuable critics will work through. Everyone can be an expert now, but the best critics were always something different altogether.