A somewhat superficial one today, because I spent hours working on the THR weekly reading (which has some interesting bits about how to interpret science, so even those of you who are not interested in the topic might find it worth reading). I might end up writing something about the current flurry of press about whether mobile/cell phones might cause brain cancer. Probably the most interesting aspect of that is how apoplectic and angry those who are skeptical of health scares get – I fear they would start throwing punches if the debate were face-to-face. Today, though, I will just comment on the instigator of the current furor, the WHO's International Agency for Research on Cancer (IARC).
The biggest problem with IARC is that it exists. More precisely, the problem is that it is a research think tank that is no better or worse than most think tanks or university departments, but that its status implies that it is something more than this. In its role as arbiter of what causes cancer, more than anything else, it is basically a convener of committees. Committees make lousy scientists. But when the committees finish their reports, they become The Word of IARC, and the press and pundits take them way too seriously. Even worse, so do governments of countries that do not have enough research infrastructure of their own to form educated opinions.
IARC issues its proclamations by convening a small subset of the researchers working on a topic, usually missing most of the experts who could contribute the best analysis (that has been true in every case I have been able to judge). They do not include experts on how to make sense of a complex body of evidence, because they do not even seem to understand that this is an issue in itself. This semi-random group of reasonably knowledgeable people comes to whatever conclusion it can hash out, and the result becomes Official Truth about the carcinogenicity of an exposure. But the quality of analysis is, at best, about the same as you would get from the faculty and students of a decent academic department studying the subject in question (and far worse than many focused scholarly studies – for example, there are quite a few studies of smokeless tobacco that are far better than IARC's).
That leads to the second major problem: IARC is not a neutral think tank. WHO has turned into an activist organization on many topics, and IARC carries its water. I referred to the IARC committees as semi-random – the semi part reflects the fact that it is easy to choose a committee that will come to the conclusions that those in power want, and IARC does just that.
The third problem, exemplified by the mobile phone controversy, is that the form of the judgments IARC issues is remarkably close to uninformative. I realize that complaining that the opinions are far from the best possible expert evaluation, are quite politicized, and on top of that are reported in an uninformative way, is kind of like the old joke: the food at this restaurant is terrible, and even worse, the portions are too small. But the lousy rules under which the committees operate probably make their work worse, and they certainly hide how weak that work often is.
One of the first things learned by anyone studying epidemiology…. You know, it is never accurate to say that, so let me start again. One of the first things someone learns in epidemiology, if they have one of the minority of epidemiology teachers who know what they are doing, is that it is a science of quantification and of circumstance, not of yes-vs-no. In physics, if a particular theorized particle is shown to have been created in a particle accelerator just once, it is a meaningful discovery. But in epidemiology (rough definition: the science of quantifying what causes diseases in humans), it matters what population you are talking about, at what point in time, what the details of the exposure are, and most of all, how big the effect apparently is.
IARC looks at none of these. In short, the premise under which IARC operates would not earn a passing grade on a second-semester epidemiology exam (again, with the caveat that the professor was actually competent to be teaching second-semester epidemiology in the 21st century).
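To make the quantification point concrete, here is a minimal sketch, with entirely invented numbers, of why "does it cause cancer?" is the wrong question and "how much, in whom?" is the right one:

```python
# Toy illustration (all numbers invented): the same relative risk implies
# wildly different real-world harm depending on the population's baseline rate.

def excess_cases_per_year(baseline_rate, relative_risk, population_size):
    """Extra cases per year attributable to the exposure (rate difference)."""
    return baseline_rate * (relative_risk - 1) * population_size

# Hypothetical exposure with relative risk 1.2, in two different settings:
rare = excess_cases_per_year(baseline_rate=2e-5, relative_risk=1.2,
                             population_size=1_000_000)
common = excess_cases_per_year(baseline_rate=5e-3, relative_risk=1.2,
                               population_size=1_000_000)

print(f"rare cancer:   {rare:.0f} extra cases per million people per year")    # 4
print(f"common cancer: {common:.0f} extra cases per million people per year")  # 1000
```

A yes/no verdict treats those two situations identically; the numbers are what actually matter for anyone making a decision.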
If you noticed the news, the official declaration was that using a mobile phone is "possibly carcinogenic for humans". It is a symptom of the underlying problem that this is such a dumb moniker. No, not the "for humans" part – we should be concerned about dogs using cell phones too. The terminology is dumb because you know what else is possibly carcinogenic for humans? Everything! It turns out that the meanings of the words do not really matter, because they are just levels on a five-point scale. It is kind of like the US terrorism threat level, and it is sometimes even portrayed with the same color scheme. Three of the other rankings are better labeled than "possibly" (definitely carcinogenic, "probably", "probably not"), though the remaining one is actually worse ("unclassifiable"). What is worse than the names used for the five-point scale, which are purely a cosmetic problem, is the absence from such a simplistic measure of almost everything that is important in epidemiology: What is the magnitude of the risk? What level of exposure are we talking about? Indeed, exactly what exposure are we talking about? And what population are we talking about? Something that caused a noticeable risk of cancer in, say, Inuit people in 1950 might not cause measurable risk for people who eat enough vegetables and have 2010 medical care.
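To see just how much information that label throws away, consider a sketch (the five group labels are IARC's real ones; every study detail below is invented for illustration):

```python
from dataclasses import dataclass

# The five IARC group labels (real); everything below them is hypothetical.
IARC_GROUPS = {
    "1":  "carcinogenic to humans",
    "2A": "probably carcinogenic to humans",
    "2B": "possibly carcinogenic to humans",
    "3":  "not classifiable as to carcinogenicity",
    "4":  "probably not carcinogenic to humans",
}

@dataclass
class StudyFinding:
    exposure: str         # exactly what exposure was studied
    population: str       # who was studied, and when
    dose: str             # at what level of exposure
    relative_risk: float  # how big the effect apparently is

# An invented finding carrying all the detail an epidemiologist cares about:
finding = StudyFinding(
    exposure="heavy use of one discontinued handset model",
    population="one cohort, one rare cancer, studied 2000-2010",
    dose=">30 minutes/day for a decade",
    relative_risk=1.3,
)

# ...and the only thing the published verdict preserves:
print(IARC_GROUPS["2B"])  # "possibly carcinogenic to humans"
```

Magnitude, dose, and population all vanish in that last step; only the label survives.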
To explain the importance of this: if a couple of epidemiology studies had shown that very intense use of one particular type of mobile phone, perhaps one that does not even exist anymore, seemed to cause a "statistically significant" increase in the risk of one rare cancer, even at a trivial level, then they would have declared mobiles a probable or certain carcinogen. Actually, I take that back – they would have been supposed to declare that, but depending on the politics of this particular committee and what WHO wanted the verdict to be, they might not have said it.
To further illustrate the fundamental problem with this system, IARC declared smokeless tobacco to be a certain human carcinogen. Though they wrote a very thick report, the conclusion was based on only five studies of oral cancer: one of an archaic US product that perhaps no one uses anymore, and the others of the hodgepodge of products (which are not entirely or even mostly tobacco) used in India by a population with one of the world's most unusual distributions of oral cancer. The committee was so politically stacked that if these studies did not exist, they would probably have dug up another excuse to reach the same conclusion. But their opinion is still used to imply that modern Western products definitely cause cancer, just like cigarettes, even though the evidence does not support that. Because the IARC method is "hunt around for evidence of any version of the broad category of exposures apparently causing any one cancer in some population", there is really very little useful information. But it makes great propaganda and an effective way to trick lazy health reporters.
In reality, though, the ratings clearly do not even represent the any-vs-none scale that is claimed. They really represent an unspecified mix of a lot of vague considerations, as determined ad hoc by a committee. Consider: it is safe to conclude that any exposure that subjects the human body to any of many stresses (exposure to a carcinogenic chemical, even at very low doses and even in an otherwise healthy food; heating the brain with low-level microwaves; etc.) has caused, or will cause if it happens a lot, at least one cancer. So saying something "probably does not" cause cancer ought to mean that there is no conceivable way it could (the existence of Neptune's moons probably will never cause a case of cancer – at least until someone goes there). But in reality, a committee reporting that assessment is not making the impossible claim that there is evidence the exposure will probably never cause a single cancer. Rather, they are being fuzzy, and probably really are saying that the risk is so low that we will never be able to measure it, or perhaps just that we cannot measure it in any population we currently know of. So really IARC's ratings are a random mash-up of certainty and quantity, which includes an unknown degree of consideration of extreme exposure scenarios rather than normal ones. All of this, along with what people really should want to know, is hidden by the simplistic form that the summary reporting takes.
The US terrorism alert level was a running joke from the start, and a purely political concept. It was embarrassing how long it took them to cancel it. Sadly, I doubt we will soon see a similar cancellation of the IARC system. This is in part because the WHO is even less good than the US at getting rid of bad ideas, but also because few people seem to realize it is just as much a joke.