07 January 2011

Unhealthful News 7 – a breath of fresh air about peer review

Wow.  My first week of reporting about unhealthful news and I have to interrupt to report on a great set of analyses that explain how little "published in a peer-reviewed journal" means.  This is quite relevant to the series, though, because naive deference among non-scientists to a claim just because it is in a peer-reviewed journal accounts for a large part of why the health news is so unhealthful.

Yesterday I wrote about the reporting in the New York Times of a study by Daryl J. Bem (at the time of this writing a free version of the research paper is available here) that purportedly provided evidence of a slight tendency of people to be able to see into the future (or, perhaps more precisely, though no one seems to have put it this way, for future events to affect people's actions/thinking).  I noted how naive the reporter and those he chose to quote were about the nature of peer review.  Today the NYT ran a series of comments on the topic by scientists and others who think about science, and it was incredibly refreshing.  This is not because I expected scientists not to understand the limits of peer review and the nature of science.  Rather, after being immersed in health research for so long, it is nice to be reminded that most of science is populated by scientists, people who understand these things, rather than the technicians, clinicians, reporters, and activists who swamp the actual scientists in the areas where I work.

My colleagues and I have written a lot about peer review in the health sciences.  It seems that remarkably few people in our field (both producers and consumers of research results) recognize the points that almost every commentator in this collection presented as if they were common knowledge.

Arizona State University physics professor Lawrence M. Krauss led off with:

 Part of the problem here is the assumption that when research is published, via the peer review process, that it is therefore correct. This is a fallacy. Lots of garbage ends up in peer-reviewed journals. All that successfully getting published means is that you have survived some sort of peer review. This is, by necessity, random and highly variable and arbitrary. 
…."publication" is not some sacred mantle, and the public should know that. Scientists already do. When I scan the scientific literature I find lots of results that I am reasonably sure are garbage and ignore them. The public should be skeptical of all such results, as should scientists, and most of us are trained to be skeptical in this way.

To focus on the relevance of this to unhealthful news, there appears to be woefully little such training in health science.  A good epidemiology doctoral program teaches that kind of skepticism, but such programs produce only a fraction of 1% of the people publishing health science.  Medical school, which produces a lot of the people publishing in the field, teaches blind acceptance and emphasizes factual knowledge, not scientific reasoning.  The "health promotion" programs that produce most public health graduates are worse.  And even most epidemiology doctoral programs fall far short of the standards demanded in other social sciences.  The purely natural science side of health research is probably better, except when its practitioners try to draw worldly (policy, social science) conclusions, at which point they are probably even worse than the others.

Science book author Ben Goldacre contributes,

But in general, with the exception of academic papers that are plainly deranged, I’m always nervous about the idea that we should effectively censor improbable or poor quality research from the academic record. …. We should always remember that academic papers are technical documents which are there to be read critically, and interpreted cautiously, by people who understand them, and ideally know something of the background: no single study is supposed to be a grand sweeping statement about whether a phenomena is real or not.

University of Hertfordshire psychology professor Richard Wiseman advises:

There is no need to go into panic mode and either overturn the laws of physics or label the original work as badly flawed. Instead, it is time to do what science does best -- take the long view and withhold judgment until the evidence is in.

This may be a bit too optimistic, however, for several reasons, including what University of California, Santa Barbara psychology professor Jonathan Schooler wrote:

...the peer review process restricts open access to scientific findings. This necessarily subjective vetting procedure produces systematic bias in that a sizable proportion of scientific studies by qualified researchers are unavailable for consideration. .…[B]ecause peer review favors significant findings, we cannot know how many similar unsuccessful studies might exist. At the same time, because peer review has an important subjective component, we do not know how many successful studies with similar conclusions were rejected as a result of bias. As the present acrimony illustrates, there is a strong prejudice against this type of research, regardless of the rigor with which it is carried out.  One partial solution would be the development of an open repository of scientific findings that encourages researchers to rigorously log their methodology and predictions beforehand, and then report all of their results regardless of outcomes afterward.

The idea of encouraging (indeed, simply making possible) publication of working papers in epidemiology is one that I pursued in my last year or two before leaving academia, though I left before I could get it established.  The idea is to make sure that research circulates to other scientists who might make use of it, regardless of whether someone wants to censor it or thinks it is "interesting" enough to sell journals.  Circulating working papers would also tend to reduce unintentional junk science by allowing authors to learn of problems that readers notice and correct them before they are etched in stone.  The complete absence of a working paper culture, and the resulting total reliance on the barely-functional journal peer review system to substitute for real peer review, is one of the reasons health science has so much more junk than other fields.  Maybe I will get back to that project someday.

Philosopher Anthony Gottlieb (whose history of philosophy, The Dream of Reason, is a great book, btw) wrote:

It would probably come as news to most people that peer-reviewed studies are only rarely double-checked in any effective way. The more you know about scientific method, the less surprising it becomes that so many papers turn out to be flawed, especially in medicine.

That speaks for itself.  After discussing the level of evidence needed to establish claims that we might call miracles, he concludes with:

It certainly doesn’t follow, though, that the Journal of Personality and Social Psychology should not have accepted his paper. To insist that the evidence in a publishable study be so strong that it would be even more of a miracle than the existence of ESP if its conclusion were false would be much too high a bar to set. With standards that exacting, science would grind to a halt. Besides, publicizing the study is the most reliable way to elicit critiques of it, and thus find out exactly what is going on.

That is perhaps a bit too optimistic, juxtaposed with the previous reference to medicine, since the health science literature is almost devoid of substantial critiques.  It is almost a series of unrelated monologues, which is why there is no penalty for publishing junk.

It is interesting that the NYT could not find any serious dissent from these points.  One of the other comments was from a professor who is part of the "debunker" club, identified as a fellow of the Committee for Skeptical Inquiry, and thus has a staked-out political position against allowing the publication in question.  But even he did not argue for the censorship of dissenting views, a phenomenon that is all too common in health science.  Only one commentator, who is best known for his popular writing, which primarily endeavors to prove that he is the cleverest guy in the room, insisted that the article should have been censored.  But his comment consisted mostly of a long "clever" example that really had little to do with the matter at hand and pretty much ignored the concepts of scientific inquiry the others wrote about.

Now that the NYT has published this excellent assessment of the limits of the peer review system, I am sure we can expect their health and science reporters to stop naively referring to peer-reviewed publications as if they were uniquely informative among sources of scientific information, and they certainly will not report on new junk science health studies in journals as if they were fact.

Bazinga!  Just kidding.  I am confident that I could finish a year of blogs about unhealthful news without even looking beyond the NYT.  [Update, a few months later:  I could have if they had not gone to a paywall -- now I am rather hesitant to send readers there too much.]

Finally, it occurs to me that peer review is the broadcast network news of science.  Those thoughts (it flourished for the latter half of the 20th century, but is now trusted only by those at retirement age or who are not really serious about the news, etc.) demand a post of their own, however, so maybe in a few weeks.

[Shout-out to Philip Alcabes's blog.  I thought I should write something in this series about the current maddening flurry of Wakefield-related vaccine-autism stories, but could not come up with a thesis point.  Phil found a focus in how the political circus accomplishes nothing for science or health.]
