A recent paper has been touted as showing that smokeless tobacco (ST, which mainly refers to oral snuff, sometimes called snus, and chewing tobacco) does not cause pancreatic cancer (PC), contrary to what some people believe. This is of little practical consequence, since even the highest plausible risk claimed for PC was only a fraction of 1% of the risk from smoking, and thus the claim had no effect on the value of ST in tobacco harm reduction. But there are several angles on this that are worth exploring here. (For those of you not familiar with my work on tobacco harm reduction – substitution of low-risk sources of nicotine like ST for smoking, which could have enormous public health benefits – you can find more background in our book, blog, and other resources at TobaccoHarmReduction.org.)
As a first observation, since this is a series about health news, I should point out that, as far as I know, the new article did not make the news. Since I cannot point to a news report for background reading, I recommend instead a good blog post by Chris Snowdon that summarizes it (and touches on a few of the themes I explore here).
It would be one thing if it did not make the news because it was not actually news (see below). But I doubt that most reporters would have realized that, so the obvious explanation does not speak well of the press. News that contradicts conventional wisdom is likely to be highlighted because it is more entertaining, but not if it is an inconvenient truth for those who control the discussion, in which case it stands a good chance of being buried. Since the anti-tobacco activists who dominate the discourse in these areas want to discourage smokers from switching to low-risk alternatives (yes I know that sounds crazy, but it is true – it is beyond the present scope, but I cover it elsewhere), they prefer people to believe that ST is riskier than it really is.
Second is the "um, yeah, we already knew that" point. Those of us who follow and create the science in this area have always known that the evidence never supported the claim of any substantial risk of PC from ST. (An important subpoint here is that an empirical claim of "does not cause" should be interpreted as meaning "does not cause so much that we can detect it". For an outcome with many causes, like cancer, and an exposure that affects the body in many ways, it is inevitable that if enough people are exposed at least one will get the disease because of the exposure. It is also inevitable that at least one person will be prevented from getting the disease because of the exposure. So what we are really interested in is whether the net extra cases are common enough that we can detect them.)
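To make "common enough that we can detect them" a bit more concrete, here is a minimal sketch of a standard sample-size calculation for comparing two proportions. Every number in it (the baseline risk, the relative risk, the significance and power targets) is a made-up assumption for illustration, not a figure from any of the studies discussed here.

```python
import math

# Illustrative only: how large a cohort would be needed to detect a small
# excess risk. The baseline risk and relative risk below are made-up numbers,
# not figures from the ST-PC literature.
p_unexposed = 0.015          # assumed lifetime risk of the disease without the exposure
relative_risk = 1.1          # assumed small net excess risk from the exposure
p_exposed = p_unexposed * relative_risk

z_alpha = 1.96               # two-sided alpha = 0.05
z_beta = 0.84                # power = 0.80

# Standard two-proportion sample-size formula (per group)
n_per_group = ((z_alpha + z_beta) ** 2
               * (p_unexposed * (1 - p_unexposed) + p_exposed * (1 - p_exposed))
               ) / (p_exposed - p_unexposed) ** 2

print(f"roughly {math.ceil(n_per_group):,} people needed in each group")
# With these made-up inputs the answer is on the order of 100,000+ per group,
# which is why a small net excess of cases can be practically undetectable.
```

The point of the exercise is simply that a small net excess of cases can easily fall below what studies of realistic size are able to detect.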
There have been three or four studies whose authors claimed to have found an association between ST use and PC. Other studies found nothing of interest, and there must be dozens or perhaps hundreds of datasets that include the necessary data, so the lack of further publications suggests that no association was found in these. There was never a time when a knowledgeable and honest researcher reviewing the available information would have been confident in saying there was a substantial risk. One of the studies that claimed to find an association, published by U.S. government employees, was a near-perfect example of intentionally biased analysis; they actually found that ST users had lower risk for PC but figured out how to twist the presentation of the results to imply the opposite. Two somewhat more honest studies each hinted at a possible risk, but each provided very weak evidence and they actually contradicted each other. Only by using an intentionally biased comparison (basically cherrypicking a high number from a different analysis of each dataset, because when similar methods were used they got very different results) could activists claim that these studies could be taken together as evidence of a risk. Several of us had been pointing this out ever since the second of these studies was published; see the introduction (by me) and main content (by Peter Lee) of Chapter 9 of our book (free download) for more details.
The worst-case-scenario honest interpretation of the data is that there are a few hints that perhaps there is some small risk, but it is clearly quite small, and when everything we know is considered, the evidence suggests there is no measurable risk. In other words, if the new report had made the news, it would have been portrayed as a new discovery that contradicted old beliefs. But only people who did not understand the evidence (or pretended to not understand it) ever held those old beliefs.
One clue about why this would be is that the study was a meta-analysis, which refers to methods of combining the results from previous studies. While some people try to portray such studies as definitive new knowledge, such a study cannot tell anyone who already understood the existing evidence anything they did not already know. It is just a particular way of doing a review of existing knowledge, usually summarizing our collected previous knowledge with a single statistic. In some cases, such as when the body of evidence is really complicated and fragmented (e.g., there are hundreds of small studies), this can be useful. That might be a case where no one actually could understand all the existing evidence because it was too big to get your head around. But doing a meta-analysis is not fundamentally different from graphing your results differently or presenting a table a different way – it might reveal something you overlooked because of the complexity of the information, but it cannot create new information.
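For readers who want to see what "summarizing with a single statistic" means mechanically, here is a minimal sketch of the most common pooling approach (fixed-effect, inverse-variance weighting of log relative risks). The study results are invented for illustration and are not from the ST-PC literature; note that nothing in the calculation goes beyond the inputs you feed it, which is the point.

```python
import math

# Hypothetical inputs: relative risks and 95% CIs from a handful of studies.
# These numbers are illustrative only, not taken from the actual ST-PC literature.
studies = [
    # (relative risk, lower 95% CI, upper 95% CI)
    (1.1, 0.7, 1.8),
    (0.9, 0.6, 1.4),
    (1.4, 0.8, 2.4),
]

# Fixed-effect (inverse-variance) pooling of log relative risks:
# each study's weight is 1 / variance of its log RR, estimated from the CI width.
weights = []
log_rrs = []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # standard error from CI width
    weights.append(1 / se ** 2)
    log_rrs.append(math.log(rr))

pooled_log_rr = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled RR = {math.exp(pooled_log_rr):.2f}")
print(f"95% CI = ({math.exp(pooled_log_rr - 1.96 * pooled_se):.2f}, "
      f"{math.exp(pooled_log_rr + 1.96 * pooled_se):.2f})")
```

The pooled estimate is just a weighted average of the inputs; if you already understood the individual studies, nothing new comes out the other end.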
So when the information we already have is rather limited and simple, as it is for the ST-PC relationship, there is no way this meta-analysis of a handful of studies could have told us anything new. Anyone who learned anything from the new study must not have known the evidence. This makes the new paper a potentially useful convenient summary, but many of those already existed, so there was no value added.
[There are other problems that make meta-analyses much less definitive than they are made out to be, including some serious conceptual problems with the most common approach. That single summary statistic has some big problems. But I will save elaboration on these points for later posts.]
Third, given that, you might wonder why some people think this was news. I have already pointed out that activists wanted to portray ST as more harmful than it really is.
A few years ago, those anti-ST activists who wanted to maintain a modicum of credibility realized they could no longer claim that ST caused oral cancer (they came around to this conclusion about ten years after the science made it clear that there was no measurable risk). While clueless activists, and those who do not care about even pretending to be honest, still make that claim about oral cancer, their smarter colleagues went searching for other claims where the evidence was not so well known.
But a quick web search reveals that the claims about pancreatic cancer risk from ST are stated as fact by anti-tobacco activists, as expected, and by electronic cigarette merchants, which I suppose is understandable marketing dishonesty, but also by some companies that make smokeless tobacco. The latter are apparently motivated by a fear of ever denying that their products cause health effects, even health effects that their products do not actually cause. It escapes me why, exactly, they felt compelled to overstate the support for the claim that ST causes PC, rather than perhaps just acknowledging that it has been claimed, neither attempting to dispute the claim nor bolstering it. I know they had the expertise to know the truth, and I urged some of them to stop actively supporting the disinformation, but it had little effect. Maybe they thought they benefitted from the incorrect beliefs in a way that was too subtle for my politically-naive brain.
The more general observation from this is that accurate science per se does not have much of a constituency. If someone has a political motive to misrepresent the science, like the anti-tobacco extremists do in this case, they will do so. Perhaps there will be a political competitor who will stand up for scientific accuracy by voicing the opposite view. But if there are no political actors on one side of the fight, or they are intimidated into not standing up to the junk science, as in the present case, then we are left only with those of us who want to defend scientific accuracy for its own sake. Needless to say, we do not have the press offices that wealthy activist groups, governments, and companies have, so we have little impact on the news. This is especially true because most health news reporters have no idea whom to ask for an expert opinion about the accuracy of a claim, so they usually just find the political spokesmen (some of whom are cleverly disguised as scientists).
Fourth, and most important for the general lessons of this series, is that the new paper exemplifies the fact that there is basically no accountability in health science publishing. This is a particularly destructive aspect of the preceding observation about accurate science not having a constituency. In many arenas, adamantly making a claim that turns out to be wrong is bad for your reputation and career. This is obviously not true everywhere – American right-wing political rhetoric is the example that currently leaps to mind – though you might expect it to be so in science. Unfortunately, it is not in public health science.
The senior author of the new paper (considered ultimately responsible for oversight; that is what being listed last of the several dozen "authors" of a paper usually means) is Paolo Boffetta. Boffetta is personally responsible for much of the junk science and disinformation about ST and PC. He was the lead author of one of the two not-really-agreeing studies mentioned above, a major player in the International Agency for Research on Cancer (IARC) report that constructed misleading evidence of cancer risk from ST, and author of a completely junk meta-analysis that engaged in the dishonest cherrypicking I mentioned above. I would love to go through the entire indictment of him, but I have been urged to keep my word count down a bit, so I will refer you to the above links, the post by Snowdon and Lee's article that is reprinted in the book chapter, as well as this recent post by Brad Rodu.
Instead I will focus on the point that since publishing in public health science is treated by many purely as a matter of scorekeeping by counting, no one pays any attention to whether someone is producing junk science or even utter nonsense. If you are someone like Boffetta who "authors" more papers than anyone could seriously analyze, let alone write, no one cares that you could not possibly be doing any thinking about their content – they just say "wow, look at that big number", since assessing quality is beyond the abilities of the non-scientists who occupy most senior positions in public health academia and government. They do not even care (or notice) that someone's publication record for the last few years contains flat-out contradictions, like the various reports by Boffetta listed here (and it gets even better – during the same period he was also first author of a polemic that called for more honest research in epidemiology and condemned science-by-committee of the type he engaged in regarding ST).
If you are thinking that things cannot really be that bad, I have to tell you that they are even worse.
The above describes what is typical for most of the best known (I did not say best respected) researchers in public health science, like those closely associated with the Nurses' Health Study I mentioned a few days ago. They crank out far more papers than they could possibly hope to do well or even think through, and these are what you read about in the news. Indeed, you are more likely to read about these mass-produced studies in the news because the authors are more famous – famous for cranking out a zillion often quite lame studies.
Down in the less-rarefied end of the field, it can get just as ugly. I have observed ambitious (in the bad sense of the term) colleagues in public health, trying to climb the ladder, explicitly making deals to put each other's names on their papers as authors, even though the other person contributed nothing to the paper and had no idea whether it was accurate. Slipping a sixth author into a list of five does not penalize anyone's credit (though it obviously should), but it lets someone boost his numbers knowing no one would ever ask him to defend the content of the paper. On a few occasions I or one of my colleagues who actually cares about science have asked a guest lecturer (often someone who was applying for a faculty job in our department) to explain or justify an analysis in one of their recent papers that we disagreed with, and were later told that actually challenging someone's claims was considered impolite. (These people would never have survived graduate school in the fields I studied!)
A lot of critics who do not really understand the field call epidemiology junk science, but typically their condemnations are based on ignorance. The truth is worse.
I wish I could conclude this point with some optimistic note of "so what you need to do as a reader is…", but I do not have one. The one bright spot that occurs to me is that when I work as an expert witness, the health science "experts" on the other side are seldom anyone who has really worked in the area, since, given the quality of typical public health articles, anyone who had written much would probably have published and stood by numerous errors that would undermine their claims of expertise.
Bringing this back to a few take-away points: If someone claims to have discovered that an existing belief is wrong, particularly if this is based on a simple review of the evidence, chances are that either (a) the new claim is wrong, or (b) the real experts did not actually hold the incorrect belief. For a politicized issue (one where any significant constituency cares about the scientific claim for worldly reasons), you are unlikely to get an accurate view of the science unless you hear from a scientific expert who supports the opposing view. If such a person says "I do not like this, but I cannot dispute the claim", you have learned a lot; if they are merely given a meaningless soundbite in a news story, then you have only learned about the bias of the reporter and have not heard the counter-argument. If you hear a counter-argument, that is where the tough part begins – for both your analysis and my attempts to empower you. I start on that tomorrow.