20 October 2010

Another foray into transport safety (car seats) and misleading statistics

This is apropos of nothing that I really work on, but that's just me.  If I were not inclined to wander off and spend a few hours on curiosity projects like this, I am sure I would not do nearly as good a job on my real work.

Numerous times over the last week, I have run into the claim that three out of four automobile child restraint systems (car safety seats) are installed incorrectly.  If you run a web search on some combination of those words you can see how many times it appears.  It seems to be so accepted as a fact that it shows up in news feature headlines, sometimes even when the statistic does not appear in the story itself.  Sometimes the statistic is just asserted as fact or attributed to "surveys" (which is pretty funny; how exactly do you design a survey question that effectively measures whether someone is accidentally doing something incorrectly and does not realize it?).  I suspect this is enough for most readers to accept the assertion without hesitation.  Slightly more skeptical readers presumably are convinced when the statistic is attributed to the U.S. National Highway Traffic Safety Administration (NHTSA).

However, anyone who stops to think about it has to ask the same question they should ask about any number of other claims by authority figures: "How can they possibly know that?"

My guess was that it was, at best, based on a single study which probably did not really show what is being claimed, and I decided to check that hypothesis.  It turned out to be rather more difficult to trace than I expected.  NHTSA does indeed make the statement quite prominently on their web page, but without linking the claim directly to any scientific evidence.  I suspect this will not come as a shock to most readers of this blog who are familiar with U.S. government agencies like the CDC and NIH putting out empirical claims -- often misleading and sometimes out-and-out false -- in the form of catchy illustrated propaganda text that would be the envy of any snake oil salesman.

After looking through several NHTSA pages and reports, it became fairly clear that it must trace back to one particular report: DOT HS 809 671, January 2004, "Misuse of Child Restraints".  I say "fairly clear" because there was never once an explicit reference from the 3/4 statistic to that report (it just seemed to show up as a reference when the statistic was claimed) and all the links to the report were 404 -- the report had mysteriously disappeared from the NHTSA website.  NHTSA seems to base their knowledge of the topic on reports by one of their researchers (Lawrence E. Decina) over the last 15 years or so, including that missing report.  Fortunately I found a third-party archived copy from when the URL was live (many years ago), and confirmed that that study seemed to be the source of the statistic.

So what did the study actually show?  I have to admit I was a bit surprised that it seemed to be based on a fairly unbiased sample; the study population was probably a bit lower SES than the American average, but was not -- as I expected -- based on people who were self-selected as doubting their installation (i.e., it was not based on people who showed up to have their installation inspected by experts).  Instead, drivers were recruited as they entered various parking lots in 2002.  This contrasts with the bias in a strangely similar finding reported recently here in Pennsylvania, which was based on modern data from people who actively sought help from the police to check their installations (a bizarre coincidence that they also found "three out of four", given that they were measuring something quite different).

However, there was still a major problem in terms of how the results were reported.  The NHTSA study's definition of a faulty installation was rather liberal.  The vast majority (almost all) of the recorded mis-installations consisted of not getting the straps (those holding the kid in the seat or those anchoring the seat to the car) tight enough.  Presumably some of these problems were so bad that the system would not have held in the event of a crash.  But almost certainly most of them were still well within the range of functionality, just not up to the recommended standard (e.g., you can fit no more than one finger between the baby and the strap), which presumably includes a rather substantial margin for error (I doubt even an infant would slip out of the system through two fingers' worth of slack).

In spite of this, NHTSA and others are clearly trying to portray the statistic as if it represented widespread consequential failure that needs aggressive attention.  They do this even as they also report on the wonderful improvements in instructions and ease of use for this equipment, and their educational successes, and thus the huge reduction in risk achieved.  Not surprisingly, this resembles the tactics of the anti-tobacco activists in government and elsewhere who want to claim both (a) things are still terrible, so you need to give us more money and (b) we have done a lot to improve things, so we are worthy of more money.  Their method for resolving the contradictions in these claims is to shamelessly make both of them and hope no one will notice.  (The contrast with the anti-tobacco activists, however, seems to be that NHTSA is right when they claim to have contributed to substantial progress over the last decade.)

NHTSA, the press, and others quoting the statistic also fall into the bizarre pattern, which I have written about before, of treating a quantitative social phenomenon as if it is somehow a physical science constant.  Even if the endlessly repeated figure were an accurate portrayal of the situation in 2002, things have inevitably changed (pretty clearly for the better) in the ensuing decade.  It would not be much different if they reported alarmist statistics about how few people wear seatbelts based on data from 1970.

However, I do want to give them props for not falling into the absurdity of reporting the 72.6% from the original study, implying that they have that level of precision.  NHTSA may be a bit alarmist, but at least they understand the concept of rounding and precision.

Note that in fairness to the actual NHTSA scientists, Decina et al., there is nothing in their line of research that suggests the authors are trying to create propaganda.  Perhaps it would have been useful if the 2002 research protocol had called for separating minor problems from those that made catastrophic failure likely, but that was a limitation of the study.  Readers more familiar with the anti-tobacco (anti-soda, etc. etc.) "research" in which the authors are clearly and aggressively trying to write propaganda will notice the contrast.

An additional point on the topic:  If the statistic were really an accurate picture of frequent current important failure, it would represent a remarkable process failure or equipment design failure.  That is, if 3/4 of parents really used this equipment in a way that was destined to fail if needed then the equipment design was terrible and/or some kind of professional intervention was needed (e.g., since it is accepted that the equipment be required by law then formal instruction or sign-off by safety personnel or licensed installers should be required too, since the requirement would be 3/4 moot otherwise).  And yet it gets blamed on operator error and it is considered acceptable to just lecture the operators (parents) for installing the seats incorrectly.

But in this case or any other, if 3/4 of operators are doing the wrong thing, then it is not really their fault.  It is a design flaw.  Have you ever noticed those situations where, in some place where the public/customers interact with an installed system, there is a sign scrawled emphatically telling people what to do, such as "insert card HERE!!!!!" or "DO'NT Use THIS DooR", directing them away from the obvious choice that everyone seems to try because it looks right?  This is usually accompanied by some nearby clerk who expresses exasperation about how *everybody* is so clueless that they cannot figure it out.  Somehow it does not seem to cross anyone's mind that the hardware/layout or process is what needs fixing, not the skills and intuitions of the majority of the population.

[Note:  operator error is often referred to, rather strangely, as "human error", as in "the crash was caused by human error".  This seems to imply that the hardware and non-proximate decisions (systems) were designed by chimpanzees or maybe cows, which I will grant sometimes does seem to be the case.  Also, credit for this taxonomy of the causes of accidents goes to sociologist Charles Perrow, who also noted that there are very few accidents that do not have multiple component causes (epidemiology talk, not his language) such that hardware/design, systems, and operator decisions all contributed.]


  1. I recently ran across a webpage that contained a link with a form that I needed to fill out. When I clicked on it, I got the (frequently seen) pop-up that there was a mismatch in the security certificate. It was then that I noticed that the webpage had a note on it instructing the user to ignore the mismatched security certificate message. So someone took the time to modify the webpage, rather than resolving the security certificate issue. Cracked me up.

  2. Thanks for the additional example, Tim. Pretty funny.

    As another example of a different point, after posting this I ran across a month-old post by Stephen Budiansky (Liberal Curmudgeon blog) about doomsaying activists trying to have it both ways:

    "...while the entire presumable goal, purpose, and raison d'être of applied environmental science is to solve environmental problems, any environmental scientist who dares to suggest that problems are being solved is asking for trouble."

    Read more: http://budiansky.blogspot.com/2010/09/teflon-doomsayers.html#ixzz13JTM8swC


