A couple of people asked me about an allusion I made yesterday to clinical trial stopping rules – rules which are based on a very weak understanding of statistics and epistemology, and thus, arguably, weak ethics – which I said was a story for another day. Since there is nothing I particularly want to cover in today's health news, I will let today be the day I start that explanation. (For those looking for a more standard Unhealthful News style analysis, you can find it in the second part of this post, where I link to a couple of other bloggers who did that for recent UK statistics about alcohol and hospital admissions.) Besides, whenever I move something to the category "to do later" it joins such a long list that it is often lost – which should serve as a hint to those of you who have asked me to analyze something I said I would get to: please ask again if you are still interested! (If you do not want to post a comment, the gmail address I use for work is cvphilo.)
Clinical trials (a prettied-up name for medical or health experiments conducted on people) which follow the study subjects for a long period of time (e.g., they give one group a drug they hope will prevent heart attacks and the other group a placebo, and then watch them for years to count heart attacks) often have a stopping rule. Such rules basically say that someone will look at the accumulated data periodically, rather than waiting until the planned end, to make sure that it does not already clearly show one group is doing better (in terms of the main outcome of the study and major side effects). If the data support the claim that one group is apparently suffering inferior health outcomes because of their treatment, the argument goes, then it would be unethical to continue the trial and thus continue to assign them to the inferior regimen. Oh, except those who recite the textbook justification for the stopping rules would probably phrase that as something like "if one treatment is clearly inferior" rather than the much longer version of the conditional I wrote; therein lies much of the problem.
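To make the statistical side of this concrete (this sketch is my own illustration, not anything from the original discussion), the reason interim looks need formal rules at all is that repeatedly testing accumulating data with an ordinary fixed threshold inflates the chance of a spurious "clear difference." The simulation below assumes two arms with an identical true event rate and a naive z-test applied at each of several peeks; the function names and parameters are invented for the example.

```python
import random

def trial_with_peeking(n_per_arm=1000, looks=10, z_threshold=1.96):
    """Simulate one two-arm trial where both arms have the SAME true
    event rate, peeking at the accumulating counts at several interim
    points and 'stopping early' if a naive z-test crosses the threshold."""
    p = 0.10  # true event rate, identical in both arms (no real effect)
    a_events = b_events = 0
    step = n_per_arm // looks
    n = 0  # subjects accrued per arm so far
    for _ in range(looks):
        for _ in range(step):
            a_events += random.random() < p
            b_events += random.random() < p
        n += step
        pa, pb = a_events / n, b_events / n
        pooled = (a_events + b_events) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if se > 0 and abs(pa - pb) / se > z_threshold:
            return True  # "stopped early" despite no true difference
    return False

random.seed(1)
trials = 2000
false_stops = sum(trial_with_peeking() for _ in range(trials))
# With ten peeks at the nominal 5% threshold, the spurious-stop rate
# lands well above 5% -- this is why formal stopping boundaries exist.
print(f"naive early-stop rate with no true effect: {false_stops / trials:.2%}")
```

The statistical fix (group-sequential boundaries and the like) only addresses this multiplicity problem; it says nothing about the prior-belief and ethics issues taken up below.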
Backing up a couple of steps, to understand the problem it is useful to realize that most trials involve assigning some people to a treatment that is believed to be inferior. Realizing this is not necessary for figuring out a statistically optimal stopping rule, but it does immediately get rid of a persistent ethical fantasy that interferes with good analysis. A typical trial involves comparing some new treatment, preventative medicine, or public health intervention to whatever is currently being done. Almost always this is because those who initiated, funded, and approved of the research believe that the new regimen will produce better outcomes than the old one. There are other scenarios too, of course, such as comparing two existing competing regimens, but the point is that those with the greatest expertise almost always have a belief about which is better. If they had to decide, right now, which would be used for the next few decades, ignoring all future information from the trial or any other source, they would be able to make a decision. More realistically, if they had to decide which regimen to follow/use for themselves, or their parent or child, right now (because what we might learn over the next ten years cannot aid in today's decision), they would be able to make a decision. Just because we are not sure which regimen is better (or how much better), and thus want to do research to become more sure, does not mean that there is not a prevailing expert opinion.
Many people who fancy themselves ethicists (and many more who just want to do trials without feeling guilty about it) take refuge in a fantasy concept called "equipoise". The term (which is actually a rather odd jargonistic adoption of that word – not that it is used in conversation anyway) is used to claim that when we do a trial, we are exactly balanced in our beliefs about which regimen produces better outcomes. Obviously this might be true on rare occasions (though incredibly rare – we are talking about perfect balance here). But most of the time the user of the word is confusing uncertainty with complete ignorance. That is, someone obviously feels inadequately certain about which regimen is better, but this is not the same as having no belief at all. Keep in mind that we are talking about the experts here, not random people or policy makers. They know what the existing evidence shows and, if forced to make a decision right now about which regimen to assign to a close relative who is in the target population, it would be an incredibly rare case where they were happy to flip a coin.
Every now and then, there is a case of such incredible ignorance that no one has any guess as to whether a treatment will help or hurt (e.g., this condition is always fatal in a few weeks, so let's just start doing whatever we can think of – the results will be pretty random, but we have nothing to lose), and occasionally a situation is so complex that it starts bordering on chaos theory (e.g., what will the new cigarette plain packaging rule in Australia do? nothing? discourage smoking? expand the black market? provoke a political backlash? reinstate smoking's role as a source of rebellion?). But such examples are extremely rare.
It is also sometimes the case that no one is being assigned to an option inferior to their best available option had they not been in the trial. For example, offering a promising new drug – or even just condoms and education – for people at high risk of HIV in Africa, comparing them to a group that does not get the intervention, may hurt no one. If the researchers only had enough budget to give the treatment to a limited group of people, that group is helped (according to our prior belief) while the other group is right where they otherwise would have been. Their lack of access to the better regimen is due to their inability to afford a better lot in life; while they are not helped by the trial, they are in no way hindered by being in its control arm. (Bizarrely, it is situations like these that often provoke greater ethical objections than cases where people are assigned to a believed-inferior regimen when they could afford to buy either regimen for themselves, but that is yet another story of the confused world of "health ethics".) Another example is the study I wrote about recently in which some smokers are given snus while others are not; setting aside all that is wrongheaded about the approach of this study, it does have the advantage that one group benefits (at least they get some free product they can resell) and the other is exactly where they would have been had there been no study. There is a similar circumstance in which the trial only assigns people to the believed-better treatment, with the plan of comparing them to the known outcomes for people not getting that treatment. This is similar to having a control group that just gets the standard treatment, though people who do trials do not like this approach because the data is harder to analyze (they have to consider the same challenges that exist for observational data). But all of these cases, while certainly not rare, are unusual.
I will reiterate one point here, in case it is not yet clear (one of the challenges in turning seminar material into written essays is I get no realtime feedback, so I cannot be sure if I have not made something clear): We are never sure about which of the regimens is better, so we might be wrong. Handing out the condoms might actually increase HIV transmission; we are pretty sure that is not the case, but it is possible we are wrong. Or niacin might not actually prevent any heart attacks, even though it seems like it should. But there is still a belief about what is better when we start.
The bottom line, then, is that most trials involve assigning some people to do something that is believed to produce inferior health outcomes. Why is this ok? It is because it is for the greater good. We want to be more sure about what is the better regimen so we can give better treatment/advice to thousands or millions of people, and so we judge it ethical to let a few hundred informed volunteers follow the believed-inferior option in order to learn that. Also, we usually want to measure how much better the better regimen is, perhaps because it costs more and we want to decide if it is worth the cost, because we want to be able to compare it to new competing regimens that might emerge in the future, or perhaps just out of curiosity.
Asking people to suffer for what is declared to be the greater good is, of course, not an unusual act. Every time someone writes a check to a humanitarian charity, they are doing this, and governments force such choices (taxation, zoning and eminent domain, conscription). But the people who chatter about medical ethics, and make the rules about trials, like to pretend that they are not doing that. From that pretense come the stopping rules, which I realize I have not really gotten to yet. But this is a complex subject and requires some foundations. I will end there for today and continue tomorrow.
On a completely unrelated note, for those of you who want some regular Unhealthful News and do not read Chris Snowdon (I know a lot of you do), check out what he wrote, based on what Nigel Hawkes wrote about a recent UK report that hospital admissions due to alcohol consumption have skyrocketed. I will not repeat their analysis and do not have much to add to it. The simple summary is (a) the claim makes no sense because dangerous drinking has decreased a lot, as has alcohol-caused mortality, and (b) it is obvious that the apparent increase was due to a change in the way the counting was done.
It is pretty clear that the government knew they were making a misleading claim when they released this information. Their own reports recognized the true observations, but their press release about their new estimate did not. The National Health Service is on a crusade to vilify health-affecting behaviors they do not approve of. But governments lie – we know that. While the commonality of that does not make it any less disgraceful, the greater disgrace belongs to the press, which is supposed to resist government lies, not transcribe them. But, as Hawkes and Snowdon predicted (they wrote their posts right after the report came out, before the news cycle), the press picked it up, with hundreds of articles that report the misleading claims and seem to completely lack skepticism (examples here, here, here, and here). This is not a difficult error to catch, either by running a Google search for the blogs that had already debunked the claim before the stories ran, or simply by asking "hey, we know that heavy drinking is way down, so how can this possibly be true?"
I suppose it is not too surprising that the average reader has no idea what stopping rules do when they read that one was employed, let alone what is wrong with them, when the health reporters cannot even do simple arithmetic or fact checking.