13 June 2011

Unhealthful News 164 - Taking scientific advice requires some scientific skill

On a slightly tangential note, but a quite critical point when it comes to health policy, the New York Times reported today that a quarter of US state legislators lack a four-year university degree.  The report does not tell us how many more majored in highly non-liberal-arts and non-science fields (accounting, pre-law, business management, fraternity parties, etc.) that provide no basis for assessing scientific claims.  But it is these legislators who make many scientific decisions, including health policy decisions, on behalf of the people.

Yes, of course, a degree does not always correspond to knowledge and ability.  Bill Gates did fine without a bachelor's degree, though some of the priorities of his foundation seem to suffer problems similar to those found in state health policy.  And on the other side:
“I don’t think it’s imperative that you have a college degree to be effective,” said Mike Fletcher, a retired state trooper elected to the Arkansas Senate last year. “I think the most important thing is to have common sense.”
But there is a strong correlation between ability to make particular scientific assessments and the ability to finish a degree in a non-rote field.  Common sense does not go very far when trying to figure out if the benefits outweigh the costs of making a meningitis vaccine mandatory.  It does not even seem to help retired cops make sensible decisions about drug policy.

The obvious response is that they have people who figure out such things for them.  The problem is that the further away someone is from understanding a scientific matter themselves, the more likely they are to believe someone who is not giving them accurate information, either out of ignorance or a hidden agenda.

You have to know something to even know who you should believe.

A policy maker who has absolutely no clue about scientific epistemology will depend on Wikipedia or 24-year-old aides (who will go to Wikipedia) to tell them what to think.  Even if it is not literally Wikipedia, it is some other source at that level, like news reporters or a local advocacy group, that interprets science at the level of what shows up in the conclusion sentence of research papers' abstracts.  As readers of this blog know, such claims are not reliable in health science.  Indeed, Wikipedia and most news outlets intentionally cultivate this kind of uncritical-acceptance-based behavior.

On a few occasions I have tried to correct errors in Wikipedia where something was once widely believed to be true, but has now been shown to not be true (and, I think in all those cases, was never actually based on evidence – it was just one of those conventional wisdom problems).  But even when I made the change in terms of "it was once believed that…, but it has now been shown/established that… [source]", the editor who controlled the page quickly changed it back.  I was informed, in effect, that most of what is out there on the web still presents the old view and does not acknowledge a controversy, and since science is democratic in the Wikipedia world, the old version stands.  Given that experience, I choose to focus on forums where most readers know enough to recognize at least the basic credibility of what I argue, even if it is contrary to what they thought they knew and what others claim.  My project in this blog is to figure out how to help people skip a few steps on this knowledge ladder, but that does not help much for those who do not even seek that knowledge.

The problem with knowledge at the news or Wikipedia level is that the people compiling it do not know who they should believe, or even how to distinguish when there is legitimate controversy.  Wikipedia is truly great at what other non-expert encyclopedias were always quite good at, getting non-controversial factoids correct, and it dramatically broadens the coverage (from "when did Lincoln deliver the Gettysburg Address?" to "who were the finalists on American Idol?").  It is pretty good with scientific controversies that do not have much of a worldly political angle ("when did humans arrive in the New World?" "what is the definition of 'species'?").  But it and newspapers fail when it comes to current controversies in active politicized sciences that public officials need to wade into.


The Wikipedia-level authors get their information from anyone who can publish an authoritative-seeming paper.  This gets pretty close to maximum current expertise in many sciences, where people authoring study reports mostly know what they are doing and generally know who to look to when they do not.  There might be disagreement over ultimate conclusions and best methods, but not complete ignorance about best methods or who the leading thinkers are.  But this is not the case in health sciences.  Most people writing the epidemiology papers, the sources of the summary "knowledge" that is used in policy, have no idea what constitutes expert thinking in epidemiology.  Thus there is yet another layer of not knowing enough to really know, which makes uneducated faith in experts and "common sense" that much less likely to identify good advice.

For example, on the question of whether there are health effects from industrial wind turbines, the government of Ontario, Canada (a major hotspot in that fight) seems to put a lot of stock in the thin report on the subject by their Chief Medical Officer of Health.  (CMOH is a strange Canadian institution wherein a physician administrator type is always the province's chief public health advisor.)  I was reminded of this a couple of days ago when I saw a newspaper cite that report as if it were authoritative.  The problem is that the CMOH and her staff were in way over their heads in writing the report, and not only did not know what constitutes the available evidence, but did not know whose analysis to believe.

Funny story:  I was cross-examined by a lawyer representing Ontario at a proceeding where I had presented testimony that the CMOH report was a joke, albeit in a less combative and more detailed way, of course.  She asked me something along the lines of, "since you know so much, did you ever contact the CMOH to try to provide useful input into the writing of the document?"  It boggles the mind.  I expect it would require more search and processing power than Google has to be able to identify any time someone is writing a supposedly expert report that is beyond their capability, and then direct the real experts to proactively contribute to it.  It seems more promising for report writers to track down the experts and ask for input.  Of course, they have to know who to even ask.

The situation in Ontario is that the lawmakers trust an authoritative-sounding government official who knows more than they do but is far from an expert in science, and in turn she does not know who to believe or how to interpret what they say.  Perhaps those she believed do know who the real experts are, but they have shown no evidence of that.  I am not sure whether Ontario legislators follow the same pattern of education as Americans, but it really would not take much scientific understanding, when coupled with a bit of partisan education (lobbying) in the subject matter, to realize that the CMOH report is worthless.  But if the local lawmakers do not have the skills to understand (when given some information and advice about thinking in the spirit of what I do in this blog) when their "experts" are giving them bad information, it does not really help much that true expertise exists, merely a few layers away.

2 comments:

  1. In light of this post, I find it ironic that you link to Paul Krugman as "critical analysis". Google "Krugman in Wonderland" to understand why I say this.

  2. E, might I suggest you take a closer look at what Krugman himself writes -- an impressive quantity of useful analysis every week. I obviously risk being wrong about this since I have no idea who is behind your single letter, but I am pretty confident I have read more Krugman than you and I strongly suspect I have probably read more from the critics of Krugman also. I have the advantage of having independent expertise in the subject matter of much of what he writes about, but I think I can see past that to make this observation:

    Krugman is a near-perfect example of what I write about in this blog, about how you can recognize who is telling the truth even if you cannot judge the subject matter. To a degree that is quite amazing (though I suppose it is motivating to know that millions of people including some of those in charge pay attention to what you say), he identifies the critiques of / attacks on his arguments and responds to them. He points out their internal inconsistencies in ways that, I think, people who are interested enough to be reading the material can understand, even without further knowledge. He shows the data demonstrating that they are wrong. He identifies the analytic mistakes they are making and explains why they are mistakes. On the few occasions that they come back with a "yes, but..." or a rebuttal, he responds to that too.

    Meanwhile, as far as I can tell, almost all of his detractors simply ignore his replies to their points. They just repeat the same claims that he has already debunked. Sure, they can come up with a few minor points that he has never had time to respond to, but for the most part they are trafficking in points that he has answered, counting on the reader to be unaware of that fact.

    If you are familiar with my writing, you will recognize this as the tactic used by anti-tobacco extremists in response to harm reduction: They make attacks which are debunked, but they never try to defend them or respond to the pro-HR arguments. Instead, they just keep repeating the same claims, counting on no one to be reading critically enough to realize that those points have already been debunked.

    I know that this post's topic focused on knowing enough to know who to believe. I am convinced that this does not require knowing all of the subject matter -- thus my own efforts. But it is definitely necessary to be able to see the difference between someone who is offering a scientific argument and responding to critiques of the theory, and someone who is simply making statements that sound good on their face without acknowledging what others have said. I am convinced that anyone who is educated and engaged enough to be reading serious punditry can learn that about THR or about tax/monetary/budget policy.
