Two different pieces in this morning's health news reminded me that there are a few good health reporters out there. The best of them is the New York Times's Gina Kolata, author of those two stories. She is one of the relatively few health reporters with a strong science background. (Do not confuse this with the many part-time or full-time reporters with clinical degrees; medical and other clinical education, even at its best, is almost devoid of actual critical thinking about scientific research. There is simply no time to do anything other than memorize factoids and methods.) Of course, this is not a sufficient explanation, since other health reporters have science degrees and, for that matter, most people publishing health science do not seem to understand science. Kolata understands how to make sense of scientific results better than 90% (99%?) of the people producing scientific results.
This morning's news included a story from her on orthotics – the thingies you put in your shoes to angle, cushion, or support your feet properly – not the most exciting health news topic, though perhaps one of the most immediately practical for many readers. A superficial glance at the headline and the newspaper's pull quote suggested that the story was about a single researcher claiming that orthotics do not do any good. My instant reaction was that this would be an example of one of those "my research shows that what thousands or millions of people know from overwhelming experience is not actually true" stories. Then I noticed it was Kolata's work and read the whole thing.
After pointing out the "I could not find the particular effect I was looking for based on the methods I used"-type research that was the hook for the report, Kolata goes on to report the overwhelming practical evidence that orthotics do help people with any number of foot and leg problems. Overall there is balance between those two sources of evidence, rather than the naive dismissal of one or the other that is common. She explains critical points, such as how past studies failed to follow basic rules of proper research – e.g., by simply ignoring study subjects who dropped out of the study (the technical term is "lost to follow-up"), which is a huge problem when the dropouts are likely to be a biased group (in this case, likely to be the ones experiencing negative effects from the intervention).
Ultimately the conclusion is an example of excellent scientific thinking, one that shows the folly of naively yielding to Official Real Science. It says, basically: these devices do make walking and the like a lot more comfortable for some people if they find one that works for them, though we cannot quite figure out why, and general rules about who ought to do what all seem to be wrong, so people who might benefit should try out several options and go with what works. How often does the health news admit that a single rule for everyone is often poor advice?
Kolata's other story reported on research showing that researchers doing clinical trials quite often failed to cite (and often apparently did not know about) previous trials on the same topic. One of the study authors, Steve Goodman, who has written several good analyses of fundamental flaws in the epidemiologic literature, remarked, "As cynical as I am about such things, I didn't realize the situation was this bad." He hypothesized that researchers either wanted to brag about their originality or were such bad researchers that they did not know how to figure out what had already been done.
This phenomenon, in addition to being bad science and leading to unethical research (in particular, it hurts study subjects without providing optimal knowledge), is partially to blame for the Unhealthful News. Goodman: "If the eighth study is positive, and the preceding seven were cold negative, is it proper to report that a treatment works?" Obviously the answer to that rhetorical question is "no", but you can guess what the news would report. As I noted in Unhealthful News 16, a sentence that begins, "Ours is the first study to show..." should end "...and therefore the result is probably wrong."
I have to mention one flaw I found in the reporting of the story: it alludes to how incredibly few of the reviewed studies properly cited the previous research, but never actually puts a number to that. Still, the description of the research methods in the newspaper was probably more complete than most of the methods sections in journal articles. It was refreshing to see an article that not only reported effectively on the science, but also covered what I personally think is the most important point, or at least the most neglected given its importance: what is wrong with the rest of the research.
It concludes:
There are several steps along the way to a published paper where researchers might be asked about already published papers on the same topic. Those who finance the research, the ethics committees that review some studies and the journals that publish the studies all could ask the investigators how they assured themselves they had found prior relevant results. But, Dr. Goodman said, none of those groups feel any official responsibility.
"It's sort of a blind spot," he said. "People sort of assume researchers know the literature, know their own field, know the studies."
Though the context is a bit different, it is difficult to think of a better summary of the problems that cause Unhealthful News. Researchers, publicists, reporters, editors, high school science teachers, and others pursue their own goals and figure that it is someone else's job to keep readers of the health news from being misinformed, confused, unnecessarily scared, or generally resigned that it is all nonsense.