13 March 2011

Unhealthful News 72 - If the simple scientific conclusion is not correct, someone better be able to explain why

Today's Sunday exploration of how to figure out who to believe offers the lesson that if someone tells you not to believe your own eyes, they had better give you a good (which means both accurate and understandable) reason why.

It is often the case that simple observations just cannot address a scientific question.  Also, sometimes when simple, common-sense answers are possible, they lead to misleading conclusions.  But this is actually rather rare.  Simple observations get a bad name because people use them to form beliefs that cannot possibly be supported by simple observations, and these turn out to be wrong, but no one ever explains why they were wrong that time. 

For example, any time we see a story about someone who used smokeless tobacco and got oral cancer, the person declares that the exposure caused the disease.  But cancer has various invisible causes and we have no idea when it started, and thus it is never possible to say for certain what caused a particular case (except for those few cancers that basically have only one cause).  However, we should not be surprised that people who really do not understand health science (which in many cases includes the health engineers – physicians – who generally understand little about such matters, though they will not admit it) sometimes make this mistake.  But just because this is a mistake does not mean that all such observations are mistakes.  Not all diseases are epistemically similar to cancer. 

If you get a serious bruise on your head and a severe headache the very minute you experienced a trauma, then you will naturally conclude that the trauma caused the headache.  You would have good reason for thinking that, and would very likely be right (and should, by the way, seek medical attention because you might have a concussion or hemorrhage that needs to be treated – physicians are great at what they do well).  Why is this case different?  The main difference is that you were able to observe the incidence of the disease – the minute it actually began.  While at any given time a fair number of people have headaches and bruises on their heads (that is the measure of prevalence), the chance of one beginning in any given minute is extremely small.  By contrast, the probability of "used smokeless tobacco" and "got cancer" co-occurring in a lifetime basically can be derived by multiplying the portion of the population that experiences one by the portion that experiences the other.  If 5% of the population uses and 1% get oral cancer, then 0.05% have both.  While that sounds small, it means that out of every million people, 500 would have both by pure coincidence – a lot of people.  And among those who get oral cancer, and thus are seeking an explanation, no math is required: 5% of them used smokeless tobacco.  So it is very easy to naively blame a coincidence on the exposure.
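The coincidence arithmetic can be checked in a few lines.  (The 5% and 1% figures are the illustrative round numbers used above, not measured rates.)

```python
# Illustrative coincidence arithmetic using the round numbers from the text
# (5% use, 1% lifetime oral cancer) -- not real measured rates.
p_use = 0.05      # portion of the population that uses smokeless tobacco
p_cancer = 0.01   # portion that gets oral cancer in a lifetime

# If the two are unrelated (pure coincidence), the portion with both is the product:
p_both = p_use * p_cancer
print(f"portion with both: {p_both:.2%}")              # prints "portion with both: 0.05%"
print(f"per million people: {p_both * 1_000_000:.0f}")  # prints "per million people: 500"

# Among those who get oral cancer, the share who used -- still pure coincidence:
print(f"users among cases: {p_use:.0%}")               # prints "users among cases: 5%"
```

Note that the product rule assumes independence; the point is that even with zero causal connection, 500 people per million would have both by chance.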

By contrast, even if you get a head bruise once a month, the chance of it happening in a particular minute is about 0.002%.  The chance of experiencing a major trauma that minute, even for the most unlucky of us, is much smaller than that.  Therefore, coincidence is so incredibly unlikely that it is hardly worth considering.  The other key difference implicit in this analysis is that you can easily observe headache/bruise (it requires no instruments or expertise to diagnose) and trauma, so you know when they happened and can basically see the cause and effect.   But it is impossible to come even close to seeing the cause and effect when it comes to cancer.  Indeed, the only reason someone would think to draw the conclusion is because they were told (falsely) that there is scientific evidence that smokeless tobacco causes a substantial risk of oral cancer.  What they might think is a common sense direct observation is, it turns out, based on complicated (false) "book knowledge".
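The per-minute figure is easy to verify, assuming one bruise per 30-day month:

```python
# Per-minute probability of a head bruise, assuming one bruise per 30-day month.
minutes_per_month = 30 * 24 * 60          # 43,200 minutes in a month
p_per_minute = 1 / minutes_per_month
print(f"{p_per_minute:.6f}")               # prints "0.000023", i.e. about 0.002%
```

Compare that with the 0.05% lifetime coincidence rate in the cancer example: the per-minute window shrinks the room for coincidence by orders of magnitude, which is why the bruise-after-trauma inference is safe and the cancer-after-use inference is not.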

The point of this?  I have just explained to you why, even though ordinary people with ordinary powers of reasoning are quite capable of drawing sensible cause-effect conclusions based on a single observation for many phenomena, like trauma, the observation "I used, I got cancer, therefore it caused it" is always baseless, even if it might have seemed to be common sense to the person making it.  To ask someone not to believe their intuition, or the simplest possible interpretation of what they think they can "see", requires some explanation (and something buried in jargon, or that amounts to "that is just how we do science", is not an explanation).  So I just provided it.

So, to come back to the second sentence of this post, if people with complicated scientific methods want us to believe that the apparently-convincing observations we have made are actually leading to incorrect conclusions, they should be able to tell us why our observations are wrong. 

This works at many levels.  Consider the first study that claimed to show that smokeless tobacco causes pancreatic cancer (a claim which, by the way, has been thoroughly debunked; not that it needed to be: I and a few others pointed out all along that it was never supported by the evidence, despite many people believing or pretending to believe the claim until recently).  In that study, the authors carefully avoided ever reporting the most basic numbers from their data, the "2-by-2 table" of the number of subjects with each combination of disease and exposure.  If they had done that, they would have had to admit that it showed a protective effect – that is, those who got cancer were less likely to have used smokeless tobacco than those who did not get cancer.  So how did they twist this into a claim that the risk was elevated for smokeless tobacco users?  By using a complicated statistical model to "control" for a group of other variables that should not have had such a huge effect (doubling the relative risk, changing a protective association into a positive one).  Of course there are cases where controlling for confounders improves your estimate, but this was not likely to be one of them, and some of these particular confounders really should have pushed the result further the other way, while others should not have affected it at all and so should not have been in the model without some explanation (which was absent).  Indeed, you probably do not know this (because most people writing epidemiology apparently do not know it), but controlling for a variable that we have no reason to believe to be a real confounder is just as likely to move the final result further from the truth as it is to move it closer.
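To see what the unreported 2-by-2 table would have shown, here is the crude calculation with made-up numbers (these are NOT the study's actual data, which the authors never published; they are hypothetical counts chosen only to illustrate a protective association):

```python
# Crude odds ratio from a hypothetical 2-by-2 table.  The counts are made up
# for illustration -- the actual study never reported its table.
cases    = {"exposed": 20, "unexposed": 180}   # subjects with the disease
controls = {"exposed": 60, "unexposed": 340}   # subjects without the disease

# Odds of exposure among cases, divided by odds of exposure among controls:
odds_ratio = (cases["exposed"] / cases["unexposed"]) / (
              controls["exposed"] / controls["unexposed"])
print(f"crude OR = {odds_ratio:.2f}")  # prints "crude OR = 0.63" -- below 1, protective
```

Any crude odds ratio below 1 means the cases were less likely to have been exposed than the controls; turning that into an elevated risk takes modeling choices, which is exactly why the model needs justifying rather than the table needing hiding.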

If you want to read the full paper I wrote on this point – which the relevant journals refused to publish – it is here (and which, I am pleased to say, shows up higher on a Google search than does the original article!).  It was undoubtedly rejected because it dared question a claim that tobacco causes disease, but also it was quite clear that the editors and reviewers did not understand the point that I am making here.  They basically said "we always control for things, so that must be the right way to do it" (this is a manifestation of epidemiology journals being run by the techs rather than the scientists).  Clearly they did not know enough epidemiology to understand the last sentence of the previous paragraph.  But they also did not understand the principle that you should not pretend the simple data does not exist, and you better have a damn good explanation why we should accept the arcane complicated version that contradicts the obvious simple interpretation of the evidence.  "That is just how we chose to do it" is not a good explanation.

This brings me to the observation that animated me into writing this.  As I have noted here before, there is strong evidence that giant electricity-generating wind turbines create a lot of noise and light flicker and cause serious health problems for some of the people living near them.  Many nearby residents have reported a pattern of diseases including insomnia, headaches, mood disorders, and others.  Like the case of the trauma (and unlike the cancer), the noise and these diseases are phenomena that can be observed without any fancy technology, and people know when the incidence occurred.  (There are some other good reasons to believe these reports that I will not go into here.)  What is extremely troubling is that the turbine industry and its supporters in government and (perhaps worst – at least from the perspective of having any professional pride) the ostensible experts they hire to write reports for them, consistently try to argue arcane points about the few systematic studies that exist while ostentatiously ignoring the huge amount of data from simple observations and common sense interpretations of cause and effect.  Their game is to basically claim that these simple obvious human observations are not some fancy official sciencey stuff, and therefore do not count at all.

This would be bad enough if this were about a technical point like how best to win at internet poker, another case where I would believe concurring observations of hundreds of people with experience even absent any formal study.  But the reports in question are often of people having their lives ruined and being driven from their homes.  Even setting aside the technical reasons why these reports are particularly convincing, it should be clear that anyone contending that such effects never happen (as the industry proponents do) owes the world an explanation that convincingly shows why so many people are wrong when they report that it has happened to them.  I cannot understand why any policy maker who was concerned with the truth and people's welfare would not demand that first and foremost, before even allowing any further arguments that are based on arcane technical details.

Bottom line:  If someone is trying to tell you that the common observations and intuition about something are leading to an incorrect scientific conclusion, give them a chance to make the case.  Be scientific and keep an open mind, because such incorrect inference does indeed happen and the formal science knows better.  The error of over-concluding from a coincidental case of cancer is an example of common sense failing.  And sometimes a complicated statistical model really does make much more sense than the simple comparison.  But most of the time in this world, the simple interpretation is right and what "everyone knows" is actually true – otherwise Wikipedia would not function at all.  So if whoever is telling you not to believe your own eyes can only tell you, "my complicated science shows otherwise", but not why the common sense (or the simple 2-by-2 table) should be considered misleading and why the complicated science corrects the problem, there is a good chance that "everyone" is right after all.
