It seems that Rick Santorum actually won the Iowa caucuses in the race for the Republican nomination for president. (And whatever you might think about him, at least he
scored ok on his concern for animal welfare; on concern for people, well, not so good.) This is not a health news story, but it is a great example of dangerous innumeracy in the press, one that illustrates why decent science reporting is so rare.
I happened to catch a few minutes of Fox News yesterday morning after this broke, though I am sure this was not unique to them. There was a discussion among the reporters (or is that "reporters"?), which included near apoplexy about how terrible it was that the revised vote estimate changed the "winner". Rather than losing to Romney by a single-digit number of votes, as originally estimated, Santorum actually won by about 30 out of the roughly 120,000 cast. The press were blasting the clerks who count and record the votes and calling for an investigation into how an error could be made in such an important process. Granted, Fox News is more intent on stirring controversy out of nothing than the other networks, but I would be surprised if any of them offered a realistic perspective.
Notice that in the previous paragraph I used the word "estimate" rather than the typical "count". This was to illustrate that the process of figuring out how many votes were cast is a complex combination of human actions, not some kind of Revealed Truth or flawless mechanistic process. It should be obvious that there are many ways that errors can be made in counting, recording, and compiling over 100,000 observations.
(Note: In a deeper sense, it is not entirely obvious that there even is a True value for the number of votes. There are probably genuine ambiguities in the process. From that perspective the count does not reveal the truth so much as create it. But we can set aside that level of analysis and just stick to the version where we believe there is a truth, but errors happen.)
Another cut at toting up, or an audit, will almost inevitably yield a different number. Even some things that seem like "just counting" are attempts to measure complicated worldly phenomena using created methodologies (a combination of actions that is called "science"), and so involve scientific error, even if there is not the random sampling that some people think is the only source of error in science. (I wrote
a paper about quantifying error in the absence of random sampling years ago, which was well received and is easy to understand, so you might be interested. It did not change the world of course -- it was widely read by people who probably already agreed with the main points, but who understandably do not want to go out of their way to actually act according to that knowledge.)
What the angry reporters were oblivious to is the fact that the most serious error was theirs, not the Iowa vote counters'. By presenting it as if it mattered that the original estimate put Romney ahead by 8 votes rather than behind by a few, they are the ones who made it a problem that a revision changed that. That razor's edge only seemed to matter to the press because they are really only good at reporting on sports, and so try to treat everything else as if it were sports. Iowa was a tie for all practical purposes. It was a low-stakes vote in a little state, and it matters only because it is a show of strength that might predict or influence the big votes later. In that context, a few votes more or less obviously do not matter.
If this were a winner-take-all process, then there would need to be a
legal definition of who won, and then there would be genuine room for complaint if later audits showed that it was not assessed properly (as with Bush v. Gore, Florida). But that is not the case, so it was just the reporting itself that created the notion that the "winner" mattered.
The error that the reporters made is confusing a question like who won a game of football or tic-tac-toe or chess (based on rules, without error in the process unless someone is truly subverting the system) with a question of who won a war or who is more popular, which is sometimes obvious, but sometimes rather more complicated to assess, and involves no bright lines. The reporters were treating the Iowa vote as a football match, and in a football match, if the initial declaration of who won is later reversed, it both changes everything and may genuinely result from some unacceptably serious problem in the process.
What does this have to do with health science and science reporting more generally? Well, if the reporters cannot even visualize how 0.01% errors could creep into a process of gathering data that they understand -- counting how many people moved to which side of the room to support a particular candidate in local community centers etc. across the state, then gathering all of these notes together without losing any, and then adding them up without keying something in wrong -- then there is no way they can hope to understand how measurement, sampling, modeling choices, and countless other points of decision and possible goofs, along with confounding and faulty instruments (to say nothing of intentional political manipulation), introduce errors into scientific estimates.
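To make that concrete, here is a toy sketch (every number in it invented purely for illustration, not a model of the actual Iowa process) of how a pile of small precinct-level slips routinely adds up to a net shift on the order of 0.01% in a statewide total:

```python
import random

# A toy illustration, not a model of the actual Iowa process: assume roughly
# 1,700 precinct tallies, and that each precinct's reported count is off by a
# vote or two some small fraction of the time (a transcription slip, a lost
# note, a number keyed in wrong).  Every number here is made up for
# illustration only.
random.seed(1)

N_PRECINCTS = 1700     # assumed number of precinct reports to compile
ERROR_PROB = 0.05      # assumed chance any one report is off at all

def net_statewide_error() -> int:
    """Sum of many small, mostly-cancelling precinct-level slips."""
    return sum(
        random.choice([-2, -1, 1, 2]) if random.random() < ERROR_PROB else 0
        for _ in range(N_PRECINCTS)
    )

shifts = sorted(abs(net_statewide_error()) for _ in range(1000))
typical = shifts[len(shifts) // 2]
print(f"typical net shift in the statewide total: about {typical} votes")
print(f"as a share of ~120,000 votes cast: about {100 * typical / 120_000:.3f}%")
```

A shift of that size is simply what honest counting by fallible humans looks like; it says nothing scandalous about the clerks.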
Interestingly, reporters will occasionally use a phrase like "no statistical difference" or "statistical tie", but presumably only because it is fed to them. I suspect they have no idea that it means "the limits of our analytic abilities are such that this could be an exact tie, or it might go a bit in either direction, and we cannot tell". But reporters would never be willing to accept "the vote in Iowa was a statistical tie", because they think that uncertainty only comes from some magical force called "statistics" (which I suspect most of them, if they think about it at all, think refers only to the concept of random sampling error).
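If the phrase were taken seriously, this is roughly all it would assert -- a minimal sketch, in which the +/-50-vote allowance for counting and compiling error is simply assumed for illustration, while the 8- and 30-vote margins are the figures mentioned above:

```python
# A minimal sketch of what "the vote in Iowa was a statistical tie" would
# actually assert.  The +/-50-vote figure for counting/compiling error is an
# assumption made up for illustration; the margins are the ones noted above,
# stated from Santorum's perspective.

def interpret_margin(margin: int, assumed_process_error: int) -> str:
    """Report a margin together with an assumed error band for the process."""
    low, high = margin - assumed_process_error, margin + assumed_process_error
    verdict = ("a statistical tie: the true margin could go either way"
               if low < 0 < high else
               "distinguishable from a tie, given the assumed error")
    return f"reported margin {margin:+d}, plausible range {low:+d} to {high:+d}: {verdict}"

print(interpret_margin(-8, 50))   # original estimate: Romney up by 8
print(interpret_margin(+30, 50))  # revised estimate: Santorum up by about 30
```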
Similarly, reporters think that "a relative risk of 1.92" is more scientific than "it doubles the risk", even though the latter is almost certainly a much better description of what we know, because it does not pretend to knowledge that is much more precise than what we actually have. I cannot claim to be free of guilt in contributing to this. E.g., having calculated that the epidemiology suggests smokeless tobacco poses about .01 of the risk from smoking, I often say "99% less harmful", and that estimate has been picked up as the conventional wisdom. But what I really found was that there was not compelling evidence of any risk at all, that some not-unreasonable assumptions gave numbers in the range of 1% or maybe 2% of that from smoking, and (most important, really) that there was no remotely plausible way to get a figure as high as 5%.
The proper statement of the risk would be most of the information in the previous sentence. But most non-scientists (e.g., science reporters) would interpret the precise-sounding assertion "it is 99% less harmful" as being more scientific than the rougher statement that is actually more accurate. They treat science as if it were some magical process that either is silent on a question or tells us an exact quantitative answer. There is a tendency for people to think that any method of inquiry that they do not personally understand must be magically perfect. But "method of inquiry they do not understand" covers most everything; after all, reporters seem to not even understand the concept of a bunch of people writing down some counts and then trying to gather them all for tallying. If they did understand that, they would not be shocked to hear about 0.01% error.
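For anyone who wants to see the arithmetic behind those phrasings laid out, here is a back-of-the-envelope sketch; the inputs are just the figures already given above (a point estimate of about .01 of smoking's risk, not-unreasonable estimates up to about 2%, and 5% as an implausibly high figure), so it illustrates the wording rather than adding any evidence:

```python
# Back-of-the-envelope arithmetic behind the phrasing discussed above.  The
# inputs are just the figures already given in the text; nothing here is new
# evidence.

def percent_less_harmful(relative_risk: float) -> float:
    """A relative risk of 0.01 versus smoking reads as '99% less harmful'."""
    return 100 * (1 - relative_risk)

point_estimate = 0.01   # roughly 1% of the risk from smoking
print(f"shorthand: about {percent_less_harmful(point_estimate):.0f}% less harmful")

# The fuller statement is a range plus a bound, not one precise-sounding number:
print(f"fuller statement: plausibly {percent_less_harmful(0.02):.0f}% to "
      f"{percent_less_harmful(0.0):.0f}% less harmful, with no remotely "
      f"plausible way to get below {percent_less_harmful(0.05):.0f}%")
```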
But there is a linguistic trick here: "99% less harmful" implies harm. Interpretation is subjective. And of course "safe" is a taboo FDA word. So when FOX says the majority of polls show the majority of people see Newt in a more negative light than other candidates -- what a great sea of no information that makes Newt negative. Verbiage is everything. p < 0.05.
I am not sure I entirely understand your point or points. I think you are concerned that the shorthand phrasing tends to obscure the possibility -- which I note in the full version of the statement -- that we are not sure there is any risk at all. I would agree, though the idea is to provide an unbiased (as that term is defined in statistics) summary, so it also obscures the possibility that the risk might be 2% or 3% of that from smoking.
Of course, there is a well-documented tendency for people to homogenize risks and treat zero risk as fundamentally different, so that they consider the difference between 1% and 0% to be much greater than 1% versus 3%, even though the latter is obviously twice as great. So by the scale of common psychology, though not an actual scientific scale, 99% is biased, and even 99.9% would be, because differences among small risks are ignored, while the difference between 0 and anything is exaggerated.
On the other hand, since everything poses some risk, the chance of the risk being zero is vanishingly small. It could really only happen if the benefits exactly offset the costs (if we are talking about just mortality risk, those benefits would be things like preventing traffic and other accidents, treating deadly depression, and the indirect effects on longevity of greater productivity). Of course, part of the goal is to communicate something that people will believe and that does not play into the liars' propaganda, so it is easiest to just avoid noting that there might be zero risk.
That said, when I am talking to people who I know are honestly seeking the truth and are sufficiently numerate, I have no problem just saying "there is vanishingly small risk" or "the risk from switching is the same as from quitting cold turkey" or perhaps even the shorthand "safe". But for way too many people, a statement like that -- even though it is scientifically more defensible than the seemingly precise 99% (let alone the seemingly more precise 98% that some people say) -- will either be confusing or will be used dishonestly. So basically I have to reserve frank scientific information for personal conversations, and cannot use it when speaking to groups or writing.