08 January 2011

Unhealthful News 8 – lying via use of arbitrary cutpoints (a nice example from the anti-tobacco extremists)

Brad Rodu already did the research and calculations on this one.  I am adding some comments about the epistemology.  In his latest post, Brad wrote about a collection of misleading statements by Matthew Myers (who heads the Campaign for Tobacco Free Kids) about new dissolvable smokeless tobacco products.  Brad identifies several absurd and dishonest points, but I will focus on just one.

Myers claimed that there was a 39% increase in smokeless tobacco use among children since 2006.  He made up that number using the Monitoring the Future Survey, choosing 2006 as the starting year because there was a downward blip in the annual statistics that year, making it unusually low, and thus making any comparison to a later year look like an increase.  In reality, as Brad points out, the results of that survey have fluctuated up and down.  A comparison to 1999 would show no increase in 2009.  An additional point, which Brad did not make, is that relying on this one survey, a rather odd one, rather than looking across the many datasets available that measure the same time series, is itself a form of cherrypicking.
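
(For readers who want to see just how much the choice of baseline matters, here is a toy calculation.  The prevalence figures are placeholders I made up for illustration, not the actual Monitoring the Future numbers; the arithmetic is the whole trick.)

    # Toy illustration of baseline cherrypicking.  The prevalence figures
    # below are made-up placeholders, NOT the actual survey values.
    prevalence = {
        1999: 7.5,   # percent of students reporting use
        2006: 5.5,   # a downward blip year
        2009: 7.6,
    }

    def percent_change(baseline_year, end_year):
        base, end = prevalence[baseline_year], prevalence[end_year]
        return 100.0 * (end - base) / base

    print(percent_change(2006, 2009))  # roughly +38% -- the blip year as baseline
    print(percent_change(1999, 2009))  # roughly +1% -- essentially no change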

I will set aside the inappropriate precision (saying 39% implies we have far better measures than we really do, so rounding to 40% would be much more honest) until later in this series and focus on the cherrypicking.  This form of publication bias in situ (choosing, from all the results of a study that could have been reported, to report a result that is biased) is one that I and my colleagues have done extensive work on.  (The most accessible example of that is here.)

The nature of the problem is this:  In most studies (even those as simple as Myers's calculated change in prevalence) it is necessary to make some arbitrary choices about who to count (what time range, what age groups, what geographic area), as well as choices like how to define the exposure or the outcome.  Due to random error alone, these choices will affect the reported results (e.g., a survey will produce a slightly higher result one year than another, due to the luck of the draw about who was surveyed, even if there was no real change in the population).  But a choice has to be made about which years, etc., to use.  This is a genuine challenge facing epidemiology and other complicated sciences.  The need to make these choices has given rise to various complicated methods and standards that honest researchers need to follow.  Because these are complicated and invisible to casual consumers of the research, it is very easy for dishonest researchers to create bias.
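
(To make the "random error alone" point concrete, here is a minimal simulation.  The constant true prevalence and the sample size are both numbers I made up; the point is only that the annual estimates bounce around even though nothing real is changing.)

    # Minimal sketch of pure sampling error: the true prevalence is held
    # constant, yet the annual survey estimates fluctuate by luck of the draw.
    import random

    random.seed(1)
    TRUE_PREVALENCE = 0.07   # assumed constant 7% every year (made up)
    SAMPLE_SIZE = 1000       # hypothetical number of respondents per year

    def survey_estimate():
        users = sum(random.random() < TRUE_PREVALENCE for _ in range(SAMPLE_SIZE))
        return users / SAMPLE_SIZE

    for year in range(2000, 2010):
        print(year, round(100 * survey_estimate(), 1))
    # Some years come out noticeably low or high even though the population
    # never changed; those low years make tempting "baselines".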

Here is the key takeaway point:  If that arbitrary choice is made without regard to the results it produces, then there will still be error (no study exactly replicates the real world numbers), but it will be random.  (Random errors are those that are captured in such statistics as "confidence intervals", "statistical significance", and "margin of error".)  But if the choices are made because they produce particular results then the error is not random, it is biased in the direction that the author prefers (and, as a more technical point, those random error statistics become meaningless).
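
(A quick simulation makes the difference vivid.  Again, every number here is invented for illustration: the true prevalence never changes, but if the analyst always compares the latest year to whichever earlier year happened to come out lowest, the reported "change" is biased upward, and the usual random-error statistics no longer mean anything.)

    # Sketch of bias from results-driven baseline selection, assuming a flat
    # true trend and made-up sample sizes.
    import random

    random.seed(2)
    TRUE_PREVALENCE = 0.07
    SAMPLE_SIZE = 1000
    N_SIMULATIONS = 500

    def one_series():
        # ten years of survey estimates from an unchanging population
        return [sum(random.random() < TRUE_PREVALENCE for _ in range(SAMPLE_SIZE))
                / SAMPLE_SIZE
                for _ in range(10)]

    honest, cherrypicked = [], []
    for _ in range(N_SIMULATIONS):
        series = one_series()
        final = series[-1]
        honest.append(100 * (final - series[0]) / series[0])   # baseline fixed in advance
        low = min(series[:-1])                                  # baseline picked after seeing the results
        cherrypicked.append(100 * (final - low) / low)

    print("mean change, pre-specified baseline:", round(sum(honest) / N_SIMULATIONS, 1))
    print("mean change, lowest-year baseline:  ", round(sum(cherrypicked) / N_SIMULATIONS, 1))
    # The first is roughly zero (random error only); the second is reliably
    # positive even though nothing changed, and any confidence interval
    # computed as if the baseline were chosen in advance would be meaningless.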


(To complicate things a bit, matters become worse if there are non-random fluctuations as well.  The best example right now is that anti-government-spending activists in the U.S. are claiming, falsely, that there has been a surge in employment by the government in the last couple of years.  What "evidence" do they use?  They look at the peak of the very temporary hiring of workers for the decennial census in 2010, a huge upward blip that is long since gone, and compare that to some point in the previous 9.5 years when there was no census going on.  Different arena, but same playbook as the anti-tobacco extremists.)
 

In short, dishonest researchers/analysts can make choices that appear to be arbitrary and unbiased but are actually designed to cook up the answer they prefer.  These choices are the same ones that produce the Texas sharpshooter problem I wrote about a few days ago in the context of cancer clusters.  In cases like that, where local residents are worried about elevated cancer rates, one can sympathize with their biased search to show the risk is higher.  But we should not forgive researchers who skew their choices a bit to try to exaggerate their results and make them look more interesting.  They create a bias (of unknown magnitude, but probably rather large) in the literature, since they tend to skew toward exaggerating associations or finding associations that do not really exist.  But worse still are people like Myers who blatantly exploit this inherent challenge of doing good science.  Other examples from the same quarter can be found in Christopher Snowdon's extensive chronicling, in his blog, of how "researchers" have used statistical tricks to exaggerate the effects of second hand smoke.

What Myers and his ilk do is not science.  It is not honest error.  It is lying, which is to say, it is intentionally trying to cause someone to believe something that is not true (e.g., that there is some huge upward trend in underage use of smokeless tobacco).  It may seem impolite to phrase it this way, but it is far more impolite to try to manipulate people into believing something that is false.  Such statistical games are just as dishonest as simply making up a number.  Indeed, in several ways they are worse:  Not only is Myers making up the claim (which could conceivably have been correct had he simply invented it without looking at the numbers), but we know he has looked at the numbers, and so he knows his claim is misleading.  Additionally, he is lying further by implying that the message he is sending is supported by the evidence.

This is pure marketing, marketing by someone who makes his living by trying to convince people to buy what he is selling (in this case, he is selling his effort to keep low risk nicotine products off the market and thereby make sure that cigarettes remain the dominant source of nicotine – a perverse goal of a particular political faction that I have analyzed at length in my work on that topic and that will probably appear in this series if you keep reading).  It is no different from the maker of a deodorant claiming that women (or men – whatever you prefer) will be falling all over you if you use the product.  At least in the case of the deodorant it seems safe to assume people will know they are being bullshitted.  Perhaps it is more like the infamous "more doctors smoke Camel", which was based on research that was equally dishonest.  (Though the Camel study was somewhat more clever, really:  They passed out Camels outside a medical convention hall and then conducted a survey half a block down the street asking what brand the many then-smoking physicians were using.  If they had had epidemiology databases available, RJR probably could have saved some trouble and just used Myers's method).

Of course, Myers is merely the poster boy for this kind of lying from the week's news.  Such lying occurs all the time in the reporting of health claims.  More modest forms of it can be found in the work of fairly honest researchers who want to pump up their results a bit, most of whom do not understand statistics well enough to realize they are doing something wrong (I have known a lot of them).  


So what are we to do?  In our research linked above, we have proposed some reporting methods and other rules that can help keep honest researchers from producing misleading results like these.  But what can the average consumer of health research claims do?

I have a few suggestions.  Look for evidence of arbitrary choices of study parameters (like choosing the baseline year 2006).  If there is no explanation for such choices, and there are no alternative numbers reported (i.e., they do not also compare 2000 and 2003, doing what is often called a "sensitivity analysis" to see if the result is sensitive to a particular choice that was made), then you should assume that the reported result has been exaggerated.  That is a start, though obviously not perfect.  If 2000 had been the year with the downward blip, then choosing it would not look so obviously strange (it is a nice round number), unlike the choice of 2006, which just cries out "beware! intentionally biased analysis!".  Also beware of explanations that sound good until you think about them.  Myers could have written "in 2006, before these new products went on the market", a typical ploy, but a moment's thought will remind you that 2005 and 1999 were also before these products went on the market.  (It is interesting to note that he did not even bother with such an excuse, counting on naive reporters to dutifully report the claim without any need for justification.)
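
(Here is what that kind of back-of-the-envelope sensitivity check can look like.  The prevalence figures are again placeholders of my own, not the real survey values; the point is simply to compute the change from every candidate baseline rather than only the one the advocate happened to pick.)

    # Rough sensitivity analysis: report the change from every candidate
    # baseline year, not just one.  Figures below are made-up placeholders.
    prevalence = {1999: 7.5, 2000: 7.0, 2003: 6.7, 2006: 5.5, 2009: 7.6}
    END_YEAR = 2009

    for year in sorted(prevalence):
        if year == END_YEAR:
            continue
        change = 100 * (prevalence[END_YEAR] - prevalence[year]) / prevalence[year]
        print(f"{year} -> {END_YEAR}: {change:+.0f}%")
    # If the headline number appears only for one conveniently chosen baseline,
    # that is a strong hint the result was selected, not discovered.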

Also, if you are interested in figuring out the truth, rather than just noticing the clues that show you should doubt the claim, it can often be done with very little work.  The dataset that Myers used is public, so any reader (or any health news reporter!), upon thinking "hmm, why 2006?", could look at the data and learn exactly why he chose that year.  Or if the dataset in question is not public, some other dataset that measured the same thing might be.  Of course, this only works if the claim is relatively simple.  If it is buried in complicated statistical calculations, this becomes more difficult, but it is probably possible to get someone to do it if it matters and you can access the data.  If you can send it to me, I might try to make a post out of it.


[As an unrelated aside, I am disappointed to report that "app" beat out "junk" and other competitors to be chosen Word of the Year by the American Dialect Society.  I, for one, intend to continue to use junk far more than app.]
