Bob Dylan Was a Neuroscientist?


Bob Dylan has done it again. In the past he’s brought low corrupt cops, apartheid, and the Vietnam war. But last month he brought down America’s favorite pop-neuroscience writer, Jonah Lehrer. A Dylan buff, Michael Moynihan, read Dylan quotes in Lehrer’s bestselling Imagine and had a “blink” moment: something about them seemed off base to him. After some brilliantly documented sleuthing, Moynihan eventually got Lehrer to admit he’d mashed up, mixed up, or just flat made up many of the quotes. In short order Lehrer resigned his position as staff writer for the New Yorker, and America once again got to marvel at the peculiarly self-destructive tendencies of one of our most talented young writers.

If you’re having your own “blink” moment about the whole affair, trust your gut. What’s funny is this: a science writer, who summarizes hundreds of mind-bendingly complex scientific studies and theories a year for us, was laid low by Bob Dylan. But shouldn’t he have been laid low by bad science? Shouldn’t he have been caught incorrectly mashing up the work of Eric Kandel or Antonio Damasio?

The easy answer – and the one that most of us seem to assume implicitly – is that there is no bad science in Lehrer’s work to find. The storyline here is that his science was good, his humanities were rotten. But a more troubling possibility is that Lehrer’s dissembling didn’t stop at Dylan, and that the reason we haven’t found his science fudges is that we’re not as good at catching them. If that’s true, the affair gives us a chance to think about how readers of pop science are supposed to know when the science is being misrepresented.

Last May I had my own Moynihan moment with Lehrer. An article he wrote in the Wall Street Journal about the Wisdom of Crowds (not a musician) had a number in it (not a quote) that seemed funny to me. Like Moynihan, I tracked down the original source paper (which, as with Moynihan, Lehrer did not cite or link to). Like Moynihan, I found that Lehrer wasn’t describing the paper correctly; he was fudging and misusing the raw data in order to support a claim he wanted to make. In my case, the authors said that their results should come from column B in the table; Lehrer used column C. The authors had run their experiment six times (to improve their odds of avoiding a fluke result); Lehrer reported just one of these runs, the one that happened to be a fluke. And finally, where the authors’ results showed a very middling and unimpressive effect, Lehrer’s single value was an outlier that made the Wisdom of the Crowd seem much, much better than it actually was.

The actual quote was the kind of thing that would slip by most readers. It read, in full, “The experiment was straightforward. The researchers gathered 144 Swiss college students, sat them in isolated cubicles, and then asked them to answer various questions, such as the number of new immigrants living in Zurich. In many instances, the crowd proved correct. When asked about those immigrants, for instance, the median guess of the students was 10,000. The answer was 10,067.” The problem was that his “for instance” was not an example. It was the one exception to an otherwise inaccurate crowd.
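To make the arithmetic concrete, here is a minimal sketch of the fudge. The six run medians below are invented for illustration (only the 10,067 answer comes from the passage above); they simply show how reporting the single best of several runs can dress up a middling effect:

```python
# Hypothetical illustration of the reporting problem described above.
# The run medians are invented; they are not the Swiss study's data.
import statistics

true_answer = 10067  # the one figure taken from the passage above

# Median crowd guess for six hypothetical runs of the same question.
# Five runs are mediocre; one happens to land almost on the answer.
median_guesses = [6500, 14800, 10000, 18200, 5900, 13400]

errors = [abs(g - true_answer) / true_answer for g in median_guesses]

# Honest summary: the typical error across all six runs.
print(f"Typical relative error across runs: {statistics.median(errors):.0%}")

# The fudge: report only the best run as if it were representative.
print(f"Error of the cherry-picked best run: {min(errors):.0%}")
```

On these made-up numbers the typical run is off by about 38 percent while the best run is off by less than 1 percent – the difference between a representative summary and a cherry-picked one.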

Like Moynihan, I followed up. I published the story on neuroself.com, the story blew up on Twitter and got thousands of hits, and Lehrer, distressed, wrote in. He said that he “would have been sure to mention that the one example I cited was the best example of the effect. But, of course, that wasn’t the point of my column.” But of course that doesn’t matter. The point of his column – whatever it is – doesn’t entitle him to present a single misleading statistic as an “example” of the correct group statistic. His readers were left with an incorrect impression of the study – one that supported the “point” of his column. In my book, this kind of fudging is just as bad as the Dylan fudging. Lehrer described the study in a way that no scientific reviewer would endorse as correct. But where was the furor then? And where was the furor when subsequent work by the scientists Tim Requarth and Meehan Crist for themillions.com found scientific fudges in Imagine? Why weren’t people upset earlier that, whether it was music or numbers, Lehrer was prone to fiddling with the data in order to support his conclusions?

I bring this up not to pile on Lehrer, but instead to draw attention away from him. As I write other reporters are doubtless examining Lehrer’s work for signs of conventional humanities plagiarism and dissembling, and further examples may be forthcoming. What would be far more useful, however, would be for his scientific work to be reviewed. After all, that’s what the general public has hired him for: not to explain Bob Dylan, but the brain. That we aren’t equipped for it doesn’t mean it isn’t important. Just as the Bible used to be written in Latin, and the lay public was reliant on the clergy to translate it into English, the modern curious layperson is reliant on science writers to translate scientific writing into terms he can understand. We need a system that is capable of catching misrepresentation.

I can think of three solutions to the problem.

The first is for everyone to lay off Lehrer, who was under enormous internal and external pressure to produce polymathic brilliance at a regular clip, and to look at ourselves. What makes us think that anyone is capable of synthesizing the amount of material that Lehrer seems to synthesize? Shouldn’t we expect less of our authors, or longer periods between their books, or ask for a larger contingent of pop-science writers, each of whom is given a smaller niche in which to develop expertise?

Second, shouldn’t science writers be expected to copy mainstream scientists and present extensive citations and explanations of their conclusions? Before the internet, when all information was printed, the publishing industry dropped the practice of footnoting popular science in order to save paper and keep down costs. That’s not an issue now. All major publications should from here on out have a “sources” or “supplementary materials” page – just as the major science journals, like Nature, do – on which reporters give background material. Holding them to this standard will ensure that they spell out, to themselves, their inner logic.

Finally, while it is ridiculous to expect humanities majors at major publishing houses to fact-check scientific claims in popular books, it is entirely reasonable to send drafts out to experts in the field to vet. Every practicing scientist in the world is accustomed to vetting the work of other scientists – it’s called peer review – and spends significant time each year reviewing, gratis, the academic work of other scientists. Paying them a reasonable fee for their time, or asking them to perform the job for public credit, would bring science journalism under the supervision of the field it purports to represent. A clearinghouse for assigning books to willing reviewers could be established by the major publishing houses, and scientific books and articles could be given some stamp of approval – equivalent to “Organic” or “Non-GMO” – signifying that they had passed muster.

The benefit to the general public would be enormous, and the rigor this practice would impose on journalists would likely help them tame some of their looser standards and metaphors. If we do it, we won’t need Bob Dylan to save us – we’ll be helping ourselves.

Comments

  1. Our experience includes the following:
    – Pop sci is an oxymoron. Like pop medicine, pop engineering and pop nuclear sci. It ain’t gonna happen.
    – Pop coverage of science creates far more pushback, blowback and attacks on science and individual scientists than support or even basic comprehension.
    – To suggest real evidence-based facts, theories and research can be even preliminarily understood without serious and long-term study and learning is a sales lie. It’s a falsehood promoted by folks trying to sell you something. JL is an example, Gladwell another. It is ideology hiding from facts.
    – Science is getting much, much more complicated and dense. We are professional marketers who spend a lot of time with brain science, and we see it becoming increasingly incomprehensible to anyone but the most dedicated specialists.
    – Natural-language talk about evidence-based knowledge is pointless and leads to increasingly useless talk. Calculus and advanced math are the language of science – for necessary reasons. For example, in brain research, higher-order concepts like reward, value, personality, choice, emotions, and consciousness are proving complete dead-ends.

    In the US, there is this sales scam that everyman should be able to understand science. It’s a lie to make money and get eyeballs. Think of medicine. How much information value is there in pop medicine? In fact, it likely does more harm than good.

    Anyone who tells you you can understand anything about an evidence-based, technical, professional topic without years of hard study is selling you something.
