================================================================

* Note – the man himself, Jonah Lehrer, comments in the comments section!

** Brutal Chad Orzel critique here.

*** Poll about whether I am indeed an idiot here.

**** Kinder, gentler Nicholas Carr critique here.

***** Beginning of 3-part post emphasizing that statistics are not scientific – the mysterious second stage of the critique alluded to below – begins here. Light editing and side notes to reader re: my statistical interlocutors added 6/7/11.

****** Galton’s original article here.

Thanks for the hullaballoo!

================================================================

Jonah Lehrer is not a neuroscientist. And I am not the first to admit it.

Responding to a mention of Lehrer in a June 7, 2010 Discovery Magazine review of Nicholas Carr’s The Shallows: What the Internet Is Doing to Our Brains, a reader said the same exact thing. It was – apparently – news to the story’s author:

Moseman, the author whose fix involved changing the wording about Lehrer (the article now identifies him as a “neuroscience author and blogger”), can be forgiven for his mistake. Our society badly wants help understanding the relationship of the mind to the brain, and we appear to have nominated Lehrer to be our man. He is young, he is handsome, and he is a certified smartypants. Fresh off a Rhodes scholarship he started writing about the brain and hasn’t stopped. He can’t be more than thirty, and already has two bestsellers on the topic: Proust Was a Neuroscientist and How We Decide. I am sure another is on its way.

And yet, as Katharine X pointed out back in 2010, he is not a neuroscientist.

Now I must confess something. Though I don’t pay much attention to the popular writing on neuroscience, I know Lehrer is one of the stars of the pop neuroscience world. Probably most people reading this do as well. And so I know that the title of this post – which is, in truth, merely the search phrase I typed curiously into Google last night after finding problems with an article of his – may seem provocative.

Merely stating that Lehrer is not a neuroscientist, which in my defense is as empirical a fact as ever there was (Lehrer has a bachelor’s degree in neuroscience from Columbia, but did not go further in his studies) may be seen as an attack: some sort of dig, or put-down, or attempt to call him out.

Given that Lehrer has never claimed to be anything but what he is, which is a journalist and public intellectual, it can’t be that Lehrer himself will be insulted. He, of all people, knows he is not a neuroscientist!

No, if I can pull a shrinky move here, it’s my guess that it will be his fans who will be upset. In saying that Jonah Lehrer is not a neuroscientist, I am messing with their fantasy. You see, the general public has wanted Lehrer to be a neuroscientist. And by this they haven’t meant they wanted him to have a PhD. I honestly don’t think they care about his degree. They want him – and anyone else they give the honorific – to understand the brain and then explain it. Pleasantly.

They want neuroscience to make as much sense as Jonah Lehrer’s writing does, and the brain to be as unthreatening as Jonah Lehrer makes it. He gives them hope that someday they’ll understand why they are bathing their neurons in all this alcohol and nicotine and caffeine and chocolate and Xanax and Prozac, and how they could better fiddle with whatever knobs are in that thing to bring on more and better happiness. And they want it to be like buying an iPhone. They want it to be intuitive and fun.

You might say that that utterly brilliant title – Proust Was a Neuroscientist – captured the public’s hope that, in the end, understanding the brain would be as easy as lying in bed and remembering that madeleine. Lehrer’s title said it, and we followed him down the rabbit hole: if Proust could be a neuroscientist then, by simple logic, the brain must be a piece of cake.

Phew!

And yet.

It surely has not escaped the notice of those of us who care deeply about the brain that in the final analysis Proust – that great introspectionist – may not, in fact, have actually been a neuroscientist. Sitting and thinking may not be the same as gazing at neurons under a microscope. And Proust’s folk psychology, which we all share – our concepts of memory and emotion and cognition and volition and action and so forth – may not in fact be up to the job of figuring out what makes us tick. And it has not further escaped us that if Proust was not a neuroscientist, then in turn perhaps Jonah Lehrer, for all his gorgeous writing and enthusiasm, is not a neuroscientist either, and that deprived in this way of all our guides we have perhaps been left all alone with this strange, even alien, organ in our heads.

This is, I think, the question that – if we are honest – haunts us all.

And so it was last night, as I lay on the living room couch in my mother’s house, after she and my wife and son had gone to sleep – the couch on which my father used to lie, and where he would read and think and snooze – with such thoughts hovering vaguely in my head as some potential, waiting to coalesce around some grain of sand in the world, that I came upon an essay published yesterday in the Wall Street Journal, in which Jonah Lehrer discussed with some bravado the implications of a recent scientific study of the wisdom of crowds for the American way of life.

Lehrer had a lot of interesting things to say – the main point of which was that a recent study seemed to show that a crowd of people makes better decisions if its members don’t talk to each other than if they do. But I had a problem – an unexpected problem. I didn’t believe it. I didn’t know what I didn’t believe, but whatever was going on in that article, it was unbelievable. My proverbial B.S. meter was going off like gangbusters, and I had no idea why.

Now as Malcolm Gladwell discussed so memorably in the opening pages of Blink, I cannot reconstruct why I sensed something wrong. Nevertheless, to purposely mix metaphors, I felt annoyed by an intellectual tickle I could not scratch. So I decided to do a little detective work, lying there on my dad’s couch holding an Apple, before retiring to bed.

Looking back, the investigative impulse may have come from the fact that Lehrer never identified the article he was discussing, referring to it simply as “a new study by Swiss scientists.” I was a bit peeved that the Wall Street Journal did not feel compelled to credit the author of an original article, or give the title of the original article, or even state the scientific journal in which the article was published. Perhaps the scientist in me – who well knows how long it takes to produce these damn papers – wanted to give that faceless person in Switzerland his or her due.

Or perhaps it was that Lehrer’s opening paragraph had left me in hysterics: “America depends upon the wisdom of crowds. When voting, we rely on the masses to pick the best politicians. When investing in stocks, we assume that, over time, people will gravitate toward the best companies. Even our culture is increasingly driven by the collective: Just look at ‘American Idol.’”

I had almost fallen off the couch. I trusted that Lehrer knew his examples were all deliciously insulting, coming as they were one after another like rapid-fire one-liners in a stand-up routine. I loved the in-your-face subversion and the reckless disregard he showed for his own safety in squirreling these digs into the conservative Journal in what surely was a sort of intellectual hi-jinks, or prank, or punk’ing, or whatever it is we are meant to call practical jokes these days.

Bottom line: he was pwning the Journal. For of course, our nation’s recent selection of Bush, twice – once? – and our support for the Iraq caper and the housing bubble and, worst of all, Lee DeWyze, have all pointed to a single conclusion: As a nation we may be many things, but wise surely is not one of them. We are an empire in decline, despite the repeated warnings of those who would save us. Lehrer knows this better than anyone, surely, having studied abroad. I can only imagine the crap he took on our behalf from those Oxford wits.

For anyone who takes umbrage at this portrait of our national discretion, I have a visual retort:

Listen, I am as mesmerized by Snooki as the next American. I once saw her interview J-Wow on the red carpet on live TV – I think it was a Dairy Queen opening – squeeze J-Wow’s implants and then say, thoughtfully, “looking good tonight.” I could not turn the channel.

I do not chalk this up to wisdom.

But if I had to say what the clincher was – what sent me off looking for what was making me blink – it had to be two stray words: for instance.

Here’s the quote where I found them. Lehrer is early into his article and describing the study upon whose data, at the piece’s end, he would base his sociological observations and policy recommendations.

The experiment was straightforward. The researchers gathered 144 Swiss college students, sat them in isolated cubicles, and then asked them to answer various questions, such as the number of new immigrants living in Zurich. In many instances, the crowd proved correct. When asked about those immigrants, for instance, the median guess of the students was 10,000. The answer was 10,067.

And there it was. Two words that triggered my blink moment. For instance.

Now to understand why these words were such a trigger, you must know that for the past seven years I’ve wrestled with fMRI-generated matrices and endless columns of subjective psychology data, all of which I’ve analyzed up the wazoo. I have waded through waaaaaaaaaaaaaaaaay too much data – or shitty fucking data, as we call it in the biz – to take Lehrer’s sentence at face value. Which is to say, I would never believe that a guess – by 144 people, no less – less than 1% away from a correct answer could possibly be a “for instance.”

Treating this sentence – “for instance, the median guess of the students was 10,000. The answer was 10,067” – as a mere example was like hearing him say “New York City public transportation mixes all classes into a single melting pot. For instance, Kim Kardashian sat next to me on the subway yesterday.”

That wouldn’t be a “for instance.” That would be a “what the f***.”

But that wasn’t the only funny thing about that 10,000. I’ve run way too many SPSS statistical analyses to think that an average guess by 144 people could be a round number – i.e., would end in 0. Let alone 0000! And so part of my blink was the thought “10,000 – jeez, what kind of miraculous average is that?”

Now if you have already seen my error – for Lehrer’s conceptual mistake is buried in his sentence, for all the world to see – bear with me. It was late at night, and I had been pulled in by the way in which he had framed his story, and so I was slow to pick up one crucial word. And if you haven’t seen my error yet, join the club – and wait a moment more.

With all of this bad statistical juju running through my head, the part of me that knew Jonah Lehrer is not a neuroscientist – and therefore might have made a mistake in reading this article and then waxed philosophical about his error – blinked. I blinked, and then I opened up a new tab in my browser and went searching for the original article to see what on earth was wrong.

The problem was, the original article wasn’t named in the WSJ piece – a breach of scientific etiquette I hope the editorial staff fixes in the future. Luckily a commenter on Lehrer’s piece, also noting the Journal’s oversight, had done some detective work, and provided the link. Here it is: Lorenz et al (2011) How social influence can undermine the wisdom of crowd effect. PNAS.

I opened the paper and did what I always do – skip the intro, go straight to the tables and figures, and then to the methods. If you ever read a science paper, you should do the same thing yourself. Reading intros and conclusions first is for suckers – they can say anything the author wants, and reading them allows the author’s “spin” – as we scientists call intros and conclusions – to frame your analysis of the data. In science, the only thing that matters is the methods and the data, because that’s where the author can’t hide behind a spun story.

And on page three, there it was. The problem. In around sixty seconds, no more than ninety, I found Lehrer’s 10,000 and then his 10,067 and immediately saw what he was doing and why it was making me blink. It was in the table just below – Table 1 – the only table I’ll talk about in this post. The red circle around the number 10,000 – the 10,000 that Lehrer used as his for instance – is drawn by me, because it’s the heart of Lehrer’s and the Journal’s problem. Or rather I should say cherry. And it is a huge, huge problem cherry.

Now listen. In the next few paragraphs I am going to start with this 10,000 and use it to explain why Jonah Lehrer should never, in a million years, have used this number as his “for instance” – and further why the table as a whole implies the very opposite of the point he made in his article. Which in turn renders his patriotic conclusion invalid. In a future post – coming in a few days – I will begin to discuss deeper conceptual problems with Lehrer’s article involving something fancy sounding – the metaphysics of central tendencies. But in this post I am going to stick to examining just this one error, because of its importance to our thinking about the whole field, and the whole trade, of pop neuroscience.

I’ll try to make it fun. Think of the next few paragraphs as an episode of CSI, except that the crime is the misreading of a single Table in a science paper, and David Caruso is – sorry Caruso – me. Oh, and nothing dies.

First, take a good long look at Table 1. Seriously. I’ll reprint it to make it easier.

Now listen: most non-scientists see a table like this and freak out. They take around 3 seconds to decide they can’t understand it, get scared of feeling stupid in the face of all those numbers, and so they calm down by skipping over it and back to the words. Scientists have a huge advantage over their non-scientist friends on this front: they don’t expect to understand this table in three seconds. Or even three minutes. They look at it the way a piano player might look at a Bach score, or an art lover might look at the Mona Lisa. They look at it for a good long time, lingering with their eyes over the columns of numbers, and getting a visceral feel for it. The table becomes a living thing for them, with a personality. And only after they have a little bit of a vibe from the table do they start trying to understand all the column- and row-headings. Do the same. Allow the numbers to form some vague impressions in your mind. Do they have decimal endings? Are they all even or odd? Are they short or long? Is there lots of variation between them?

This vibe in hand, let’s begin.

Look at the red circle. That 10,000, there in the last column, third row – it’s just crying out to be compared to the “true value” in the second column, third row – the 10,067. Let those two numbers resonate with one another, as they surely did in Lehrer’s mind. See how similar they are? To quantify this, the authors calculated the percent similarity (the guess minus the true answer, divided by the true answer – or -67 divided by 10,067). The percent difference between them is -0.7%, as indicated by the 10,000 (-.7%).

These numbers – the ones Lehrer cited in his article – will be your anchor points in the table. Always go back to them if you get confused. In an English sentence, a scientist would say that this line reads: “the 144 Swiss college students’ median guess as to the number of new immigrants to Zurich was 10,000, a difference of only 0.7% from the true value of 10,067.”
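That calculation is simple enough to check yourself. A quick sketch in Python, using only the two figures quoted above:

```python
# Percent difference between the students' median guess and the true
# value, using the two figures quoted from Table 1 of Lorenz et al. (2011).
guess = 10_000       # the median guess
true_value = 10_067  # the true number of new immigrants to Zurich

percent_diff = (guess - true_value) / true_value * 100
print(f"{percent_diff:.1f}%")  # → -0.7%
```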

But instead of considering this value an “instance”, as Lehrer did, you should notice that this was by far the best guess produced by “the crowd” in response to the six questions they were asked. The fact that Lehrer chose the third value in the column, rather than the first, is also a bit of a giveaway that there was a problem with the first two – at least if you wanted to make a wise-crowd point.

Running down the right-hand column labeled “median,” the guesses by that crowd of 144 college students miss their targets by 29.3%, 59.1%, 0.7%, 14.1%, 60.9% and 56.9% respectively, for an average error of 36.8% away from the correct answer. That’s a big error. That ain’t no 0.7%.
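Recomputing the whole column from the median guesses and true values in Table 1 takes a few lines; this Python check (mine, not the paper's) reproduces the six errors and their average:

```python
# Percent errors for the crowd's six median guesses (values taken from
# Table 1 of Lorenz et al., 2011: median guesses vs. true answers).
medians = [130, 300, 10_000, 170, 250, 4_000]
truths  = [184, 734, 10_067, 198, 639, 9_272]

errors = [abs(g - t) / t * 100 for g, t in zip(medians, truths)]
print([f"{e:.1f}%" for e in errors])
# → ['29.3%', '59.1%', '0.7%', '14.1%', '60.9%', '56.9%']
print(f"average error: {sum(errors) / len(errors):.1f}%")  # → average error: 36.8%
```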

So here’s the first question that we want to ask the science editor of the Wall Street Journal: why did they let Lehrer report only one value – the 0.7% – and say “for instance” instead of reporting the group average – the 36.8%? But we don’t have to ask. The answer is obvious. What he did is an example of a fallacy in science called cherry picking. Which is to say, that 0.7% made the point about the wisdom of crowds in spades. It was almost as good as Galton’s initial, amazing Wisdom of Crowds finding. Which is to say, it made a good story. And if I didn’t understand statistics, I would have acted like a kid in a candy shop and cherry-picked that number too.

We’re getting close to my error now. Because as you get into the groove of analyzing this table, something else weird and blinky should jump out at you – the same thing that jumped out at me when I saw the 10,000.

All of the median numbers end in 0, and none have decimal points.

Wait. Isn’t this article on the wisdom of crowds? The median “guesses” to the six questions were 130, 300, 10,000, 170, 250, and 4,000 respectively. The correct answers, meanwhile, were 184, 734, 10,067, 198, 639, and 9,272.

Okay. Stop a second. If you had to decide which of those two sets of six numbers represented a group average guess – or as Lehrer put it in the article, “in many instances, the crowd proved correct. When asked about those immigrants, for instance, the median guess of the students was 10,000. The answer was 10,067” – which set would you say a “crowd” would come up with?

Obviously your answer would not be the numbers ending in 0. What are the odds that 144 people are going to guess 144 different numbers and the average of those numbers will end in 0? About one in ten at best – and that’s before you consider that the average of 144 whole numbers will almost always carry a decimal fraction. But of course the numbers are “funnier” than that – there’s that 10,000 and that 4,000 and that 300 and that tidy 250. These numbers shouldn’t be popping up for a 144-person average.
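If you want a feel for how unlikely that is, here is a quick simulation (my own illustrative sketch with made-up uniform guesses, nothing from the paper): the mean of 144 integer guesses is a whole number only when their sum happens to divide evenly by 144, never mind ending in 0.

```python
import random

# Illustrative simulation (made-up data): how often is the arithmetic
# mean of 144 integer guesses a whole number at all?
random.seed(0)
trials = 10_000
whole = sum(
    1
    for _ in range(trials)
    if sum(random.randint(1, 20_000) for _ in range(144)) % 144 == 0
)
print(f"{whole / trials:.1%} of simulated means were whole numbers")
```

The expected rate is about 1 in 144 – under one percent – and a mean ending in several zeros is rarer still.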

It was as I mulled this over that my eye went up to the word “median” at the top of the column, and I realized with a “duh” what was happening. And if you have been bothered, as you read along, by my apparent confusion of mean and median, this was the moment I realized my mistake. I had overlooked the word median, which seemed to be used incorrectly, and gone with the gist of Lehrer’s implication that he was talking about a group average – as when he said “the crowd proved correct…” – as an article on the wisdom of crowds would lead one to assume, and as the article in question overtly states is the most common measure of crowd wisdom. It had never occurred to me – as it never occurred to the study’s authors, who identified the geometric mean as the outcome variable of interest – that Lehrer would be talking about an actual group median. But now I realized he really meant median – and began to suspect he didn’t know what median meant. Because median guesses are not guesses by a crowd, as Lehrer states. They are guesses by a single person. They are guesses by the median person in the group – number 72 (or 72 and 73 in even samples) out of 144. Which is to say, they have nothing to do with the point of the Journal’s article! [Note: this is the section that drove all my critics crazy. I had implied, but not overtly said, that I was saving my critique of central tendency junkies for a group of later posts; they had little way of knowing this. My bad.]

In statistics, the medoid is defined as the person with the middle value in a group – half the group is above them, and half below; medians are essentially equivalent to the medoid. Which means that, for all intents and purposes, the median person is one guy. Not a crowd! A single person! Which is why the numbers are so pretty – they were chosen by individuals. [Again, people went nuts over this – see this series of posts for the metaphysics behind this point; it is NOT (and cannot be) empirically wrong, as statistics – like mathematics – are not a branch of empiricism. They are a branch of applied metaphysics.]
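The mechanics are easy to demonstrate. In a group of 144, the textbook median is the average of the 72nd and 73rd sorted answers – everyone else’s actual values are thrown away. A toy sketch (made-up guesses, not the study’s data) also shows why medians come out so round when people guess round numbers:

```python
import random
import statistics

# Toy illustration: 144 people who, like most of us, guess round numbers.
random.seed(1)
round_numbers = [500, 1_000, 2_000, 5_000, 10_000, 20_000, 50_000]
guesses = [random.choice(round_numbers) for _ in range(144)]

s = sorted(guesses)
median = statistics.median(guesses)
assert median == (s[71] + s[72]) / 2  # only persons 72 and 73 determine it

print("median:", median)  # a round number – it comes from actual guesses
print("mean:", sum(guesses) / len(guesses))  # rarely round
```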

Feeling silly, I looked at the table more closely, and saw that the words “wisdom of crowd aggregation” at the top of the table explicitly excluded the median category. This was a bad sign for median – it meant the authors probably just included it for kicks, and not as a measure of crowd wisdom – which made sense, given it wasn’t a crowd answer.

And then I read the methods section, which confirmed my reading in spades. There I found this sentence: “this confirms that the geometric mean… is an accurate measure of the wisdom of crowds for our data.” [Lorenz et al, p. 3, top left paragraph]. Only later, in the discussion, do they mention that medians are close in value to the geometric mean, but they do not explain why they did not use the median. One has one’s suspicions…

Things were looking bad – really bad – for Lehrer’s characterization of the paper. Not only had he cherry picked a single value out of a column of values, but he had chosen a column of values the authors did not use as their dependent measure; the authors explicitly say that it is the geometric mean that should be used to judge crowd wisdom. Not the median! In his WSJ article Lehrer is using a number generated by one person, to one question, instead of by 144 people to 6 questions.

[Again, critics went nuts over this, saying that the median is a group statistic. First, this is an article of faith – statistics are not an empirical science – and second, most readers of the WSJ would not understand this meta concept, and would recognize median values for what they are – analogs of a representative democracy, in which each value along the x-axis turns out to have one vote, regardless of what that vote is for, and only the one or two middle members of the group go on to represent it – with all information about outliers thereby, purposely, lost. This somewhat complex critique is explored in a later group of posts.]

Furthermore, the authors explicitly tell him that he should be looking at another column of numbers! As my grandfather used to say, Oy! It’s as though Lehrer either didn’t read, or didn’t understand, the table and the methods section. Unless, of course, he purposely mischaracterized it – which I sincerely doubt.

Now of course Lehrer’s mistake was a relief for me, because it explained why I was up surfing the internet looking for Swiss college students doing strange things in cubicles when I could be sleeping. It meant my blinky hunch that something was wrong with Lehrer’s article was correct.

The bad news was I was hooked. Now I wanted to actually understand the paper that Lehrer had misread, to see if despite his error it nevertheless said what he claimed it did. I didn’t know yet whether Lehrer had made a small technical mistake or a big conceptual one.

In doing this, I should make another comment about the psychology of reading science papers. If I were the kind of person who trusted what authors say about their own data, I would never have been in this position in the first place. The introduction to the paper, which I did eventually read, more or less supports Lehrer’s interpretation. But as a scientist – having been on the inside of the “spinning wars” that roil the field – I was as cynical as anyone. I was completely uninterested in what the authors said their data said. I kept looking because I wanted to know if it really said what they said it said.

So I kept looking at the data Lehrer should have been looking at. In the figure below, I’ve circled that data – the exact data the methods section said we should be looking at. The geometric mean (circled in blue, below) – whatever in God’s name that is. [Chad Orzel got upset over this throwaway line’s effort to spare my readers a conversation about the philosophy of tail reduction that, as implied, I was saving for a longer discussion in later posts. Sorry Chad!]

When I looked at that middle column of numbers, labeled “geometric mean,” I saw that those numbers look horrible. Of the six guesses, the closest to the true value – row 4 – is 11.9% off, not that gorgeous-looking 0.7%, and three of them – rows 2, 5 and 6 – are over 50% off. And then I found myself doing what I had suspected I might do – doubting the study authors’ own portrayal of their results. This crowd of students, I thought to myself, is not looking too wise after all.

And then I noticed that very first column of numbers (green circle, below). The one under the words “arithmetic mean,” which is just a fancy-pants way of saying “the plain average of the numbers the students actually guessed.” This is the statistic Galton originally reported, according to Wikipedia, when introducing the world to the Wisdom of the Crowd phenomenon. That is, it is computed directly from the numbers that real people literally wrote down when guessing the answer to the question.

And if you thought the geometric mean was bad, the arithmetic mean – unlike in Galton’s study – is horrrrrrrrrrrrrendous. Here the crowd was hundreds of percents – yes, hundreds of percents – off the mark. They were less than 100% off in response to only one out of the six questions! At their worst – to take a single value, as Lehrer wrongly did with the 0.7% – the 144 Swiss students, as a true crowd (unlike the 0.7%), guessed that there had been 135,051 assaults in 2006 in Switzerland – in fact there had been 9,272 – an error of 1,356%.

Hell, Snooki could have done better than that.

I won’t trouble you with any more of the methods section, save to say the authors tie themselves in knots explaining why the actual guesses people make – the numbers they write down and actually mean – are not a good estimate of the wisdom of crowds, while the geometric mean – something confusing involving outlier-killing logarithms – is much better. Suffice it to say that none of the actual students writing down actual answers would have recognized their distorted numbers after such manipulation, so that whatever wisdom such a crowd might have, none of its members would know it.
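For the curious, the “outlier-killing” works like this: the geometric mean averages the logarithms of the guesses and then exponentiates back, so one wild guess barely moves it. A toy sketch with my own made-up numbers, not the study’s:

```python
import math

# Toy illustration: five guesses, one wild outlier.
guesses = [100, 150, 200, 250, 1_000_000]

arithmetic_mean = sum(guesses) / len(guesses)
geometric_mean = math.exp(sum(math.log(g) for g in guesses) / len(guesses))

print(f"arithmetic mean: {arithmetic_mean:,.0f}")  # → arithmetic mean: 200,140
print(f"geometric mean:  {geometric_mean:,.0f}")   # → geometric mean: 944
```

The geometric mean can never exceed the arithmetic mean, which is why the paper’s geometric means sit so far below the raw averages the students actually produced.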

Just put that aside. We’re going back to Lehrer’s article. Consider this early quote:

Here’s the bad news: The wisdom of crowds turns out to be an incredibly fragile phenomenon. It doesn’t take much for the smart group to become a dumb herd. Worse, a new study by Swiss scientists suggests that the interconnectedness of modern life might be making it even harder to benefit from our collective intelligence.

Wait. What?

Having just analyzed the original article, you should be left wondering what Lehrer means when he talks about “the smart group.” Forget the dumb herd part – I don’t even know if there are problems there; the article came out yesterday and I am on vacation and squeezing this post in between family outings. I’ll try to revise later.

But the problems with Lehrer’s premise are enough for me. The authors of this article collected raw data that make it quite clear that in this particular paper there is no smart group.

There is no wise crowd! Those Swiss students blew it. Blew it! Every single question, the arithmetic mean, and the geometric mean, and the median – save for a single case that Lehrer cherry-picked – was from a human standpoint wrong, wrong, wrong, wrong, wrong and wrong. The end.

So what is Lehrer talking about? He is talking about that single 0.7% single-person data point: one person, selected after giving their answer, got close to the correct answer on one of six questions. One person guessed 10,000 when the answer was 10,067. That’s one hit out of 144 x 6 = 864 attempts. That seems about right to me, from a common sense perspective. Which is to say, that is a shitty batting average. And so the actual numbers produced by actual people give the lie to all three of his examples at the start of the piece, about elections and stocks and American Idol. They explain, if anything, why stock markets get things wrong, America elects bad presidents, and the winner of singing competitions is never the best singer – with or without social media and its supposed herds.

Which brings us back to the central concern of this piece: what does it mean to be a neuroscientist?

Here’s my deep point. I don’t care about straight psychology – straight psychology is, not to pull punches, over. I care about neuroscience. And Lehrer was not trying to be a neuroscientist in this article. This was a straight-up psychology article. But modern neuroscience, his chosen wheelhouse – particularly the subfields of behavioral and affective and cognitive and social neuroscience – is radically more complex than straight psychology. Its experiments are like this study combined with brain imaging. Similar and more serious mistakes, by scientists and their interpreters, might be made concerning the psychological interpretation of the activity in various parts of the brain.

In short, neuroscience is really, really complicated. So why are we letting our fantasy that understanding the brain might be a piece of cake lead us to expecting any one person – even a Rhodes Scholar – to figure it out?

One person can’t do it.

I think we’re at the end of anyone seriously thinking they, alone, can understand the brain. None of us is a neuroscientist. Not Jonah Lehrer, not me, not even Antonio Damasio. It’s going to have to be a team effort from here on out. Nobody will ever understand the whole brain – conceptually, maybe, but not at the neuroscience level – the level of its physical structure. That sucker has 100,000,000,000 neurons, each connected to 10,000 others, each firing around 100 times a second. That thing is exponentially harder to understand than any other phenomenon in science – some people say the universe.

And that’s where we, the consumers of pop neuroscience, need to get real. Really, really, really real. I admitted Jonah Lehrer is not a neuroscientist at the start of this piece, and I’m sure he admits it, but now we all have to admit it. We need to make him – with all his ambition and intelligence and thoughtfulness – not be a neuroscientist on our behalf. No single person can be a neuroscientist ever again – not the way the public wants them to – not the way a dermatologist can still be a dermatologist, or a carpenter a carpenter. Too many smart kids, in too many labs, with too many bright ideas, and too much government and industry financing are discovering too many new ideas for any one of them to keep track of all the facts of the brain. There is simply waaaaaaay too much to know. We’re going to have to do this together.

One of the hardest parts of scientific training is the surrendering of hope – in particular, the hope that one person can understand it all. All scientists go through this, more or less. There is simply too much information, too much technical knowledge, too many specialized concerns, for anybody to understand all of neuroscience, or psychology. I cannot review papers in cellular neuroscience, for example, and will not review technically demanding papers in neuroimaging. And I know jack – officially, at least – about the academic field I care most about, and feel would be most useful to an improved understanding of the brain: philosophy. I have had to surrender my hope, as have my colleagues; now the general public does as well. We need to split our expectations in two. We should all hope to understand the metaphysics of neuroscience – metaneuroscience, if you will. And we should each seek to understand some, but not all, of the facts of the brain, and be prepared to explain them to one another.

Which is to say, we need not to get spun. Which is what surely happened to Lehrer.

Newspapers like the Journal need to examine their assumptions. Yes, they can send a reporter to cover the White House and fight through the spin. But who guaranteed them they can do the same with neuroscience? I honestly don’t think they can. And yet the solution can’t be to do away with solo practitioners like Lehrer. It has to be the opposite: to make many, many more of them and link them together. Like neurons in the brain.

To close, my neurons just reminded me of some old Latin line about the government – ah, here it is on Google: quis custodiet ipsos custodes? Who will guard the guards?

Well, consumers of neuroscience need to ask: who will read the readers? They can’t do it all by themselves. I think the answer must be all of us – and until then, more of us. Otherwise there are just too many mistakes out there waiting to be made. I have no doubt that even in this post I have made my share of them. Please let me know.

After all, I’m not a neuroscientist either.

Follow @peterfreed

