Occasionally, but surprisingly rarely, I wonder if I should have given myself a few personal days off during my 365 Days of Unhealthful News. I have no problem with High Holidays or even the day I had to post from my phone in the minutes before a plane took off, but it is really difficult to monitor the health news during Philly Beer Week. So today I will write about an observation made (over a few beers, naturally) by my friend and colleague, Igor Burstyn, a fellow refugee from the collapsed public health sciences program at the University of Alberta.
The observation (my paraphrase, interpretation, and substantial embellishment, so do not hold him to this) was that you do not do research to try to prove a point. You do research to try to figure out what is going on in society/the world/the universe/etc. If you are quite sure of what is true, and you are just trying to demonstrate it, then you should be doing something other than research, like education or activism. Demonstrations are what the "experiments" in grade school science class are. They are not what real researchers do.
The immediate corollary is that researchers who set out to prove a particular point are not really being researchers. They are engaged in research kabuki, in which they act out a set-piece that uses the language of research but is really glorified rhetoric. Note that this differs from someone who has a strongly held hypothesis that they want to test; such researchers are just as interested in figuring out whether they are wrong, even though they strongly doubt that they are. I am not sure I can offer a simple rule for spotting the difference, but it is easy to recognize when you are in and around it.
A more subtle corollary is that if someone has a major conflict of interest problem, chances are what they are doing does not really qualify as research. It is not merely that they might be biased in their methods or analysis. It is that if they are so biased about what they want the research to show, then they must have expected it to show that; anyone who wanted a particular outcome and ran the study anyway must have been predicting they would get it. Thus they were performing a demonstration, not doing research.
To tie this to recent news, rather than leaving it an abstract discussion, I will suggest – in a blatant act of laziness and promotion – checking out today's tobacco harm reduction weekly reading list. The readings from Michael Siegel and Jeff Stier offer examples of the difference between research and advocacy posing as research, in contrast with the research by the Global Commission on Drug Policy, which set out to really figure out whether the Drug War is working.
It would be overly simplistic to suggest a strict equivalence or otherwise over-interpret this. Someone who is genuinely curious can certainly still be biased by their worldly preferences, though the genuine curiosity should reduce any such bias. And someone who is interested in proving a point might still act as an honest scientist, prepared to discover that the point is wrong. But it is probably not over-interpreting this line of reasoning to say that the bias deriving from conflict of interest manifests primarily at the study initiation and design stage, and not so much in the analysis. (Of course, many times the bias built into the design can only be recognized by looking at the analysis, since sometimes the design is "fish around until you find a model that best supports the goal".) By the time a researcher with a serious conflict of interest – say, someone who is seeking to show that an intervention they favored was effective – is analyzing the data, they have already set up the demonstration. Any study that would legitimately have failed to find the desired phenomenon if it did not exist – real research – would be designed differently from one that seeks to demonstrate something is true if at all possible.
For example, I was up most of last night finalizing proofs for a paper I posted about before. It was always intended to be analytic, not research, arguing for a particular way of looking at the evidence that wind turbines cause human health problems. It is a research paper only in the sense that anything analytic that appears in journals gets called that: I knew what answer I was going to present from the start. So when I wrote my COI statement, I did not hesitate to describe, matter-of-factly, that I do work as a testifying expert on behalf of communities fighting the siting of local wind turbines. I had written a paper that was designed from the start to argue (sort of like demonstrating) how to think about the matter I worked on. By contrast, when I do research, I generally object to simply listing funding I have received as if that were COI. I will identify the motive for the study, but if that was not biased and the study was not designed to produce a particular result, then there cannot be a serious COI problem. Most "researchers" do not explain the motive for their study or really describe their methods, so they may well be hiding a genuine COI problem; it is all a matter of whether you honestly describe what you have done.
One easy lesson is never to trust anyone who says "this research will show…". They are no more doing research than the presenters at a kiddie "science" museum are doing science when they demonstrate how to make something go boom. Demonstrations can be interesting, but they are not research.