03 December 2013

Why tobacco use modeling needs economic mechanisms - FDA workshop slides

I have not been doing a great job of creating updates about my THR modeling work, so here is one easy, very partial remedy for that.  The FDA Center for Tobacco Products is holding a workshop about modeling and other methods that relate to their regulations, and I am presenting a talk using these slides.

As those of you following my work on this subject know, I am rather critical of the usefulness of the existing "models" of tobacco use.  (For those not familiar, follow the tag on this post for previous and future posts on the subject.)  I use the scare-quotes because I argue that most of what are called models in this space fail because they are not actually a simplified version of the real system, but are really just complicated calculation tools.  They completely omit the underlying mechanisms of the system -- the consumer economics -- and thus just translate high-level statistics (e.g., "assume 2% of the smokers transition to e-cigarettes each year") into high-level outcomes (e.g., "the smoking rate over time follows this path").  This does not mean they are not useful, of course.  There is value in that.  But the value is calculating the answer to high-level hypothetical questions, not actually representing the system.

My argument in this talk is about how the lack of that real representation of the system means, most obviously, that it is impossible to make predictions about previously unobserved phenomena.  (If the only use of data is to say "we have observed that when X happened then Y resulted", you cannot say much about situations that have not happened before.)  But it also means that even the high-level predictions are likely to be wrong because they are based on a misuse of the data (which I call superstition rather than science).

I made the tactical error of offering to present on any of several aspects of my research agenda, but fortunately the organizers shared my opinion that this bit is the most crucial for people to understand at this point.  (Note to self: Don't count on that in the future.)  The talk is likely to come as a rather unwelcome coda (it is scheduled very late in the workshop) to a series of presentations about "models" that fail to do what I am saying must be done.  Of course, I might be pleasantly surprised and discover that my message has already been covered.  Such good news for the science would be bad news for my talk, of course, making it awkward with a lot of phrases like "this has already been discussed, but to reiterate the point".  But I am not optimistic (or should that be pessimistic?) that there is much chance this will occur.


  1. Forgive me if what I say is wide of the mark.
    It seems to me that a 'model' which is based upon 'what if' is not a model at all. It would better be described as 'a thought experiment' in the Einstein mode. Such a thought experiment might take this form:

    1. Suppose that smoking causes X number of deaths per annum, of which there is some evidence, but still some uncertainty.
    2. Suppose that SHS is 1/100 the 'strength' of actual tobacco smoke inhaled.
    3. Then the number of deaths will be X/100.
    4. If X equals 400,000, then X/100 will equal 4,000.

    On the basis of the above, individual cities, zones, counties, states, etc, could easily be projected to have a specific number of deaths to several decimal places. Further, costs to health insurance providers could be calculated (in the UK, that would be the NHS). Further, the 'model' could be extended to actually predict the number of lung cancers, heart attacks, COPD events, etc. The sophistication would be limited only by the power of the program and the computer.
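
    The arithmetic in the numbered steps above is simple enough to sketch directly.  All of the numbers here are the hypotheticals from the thought experiment, not real estimates, and the linear scaling is exactly the assumption being questioned:

    ```python
    # Sketch of the linear-scaling thought experiment described above.
    # Every number is a hypothetical from the comment, not a real estimate.

    SMOKING_DEATHS_PER_YEAR = 400_000  # supposed deaths attributed to smoking (X)
    SHS_RELATIVE_STRENGTH = 1 / 100    # supposed "strength" of SHS vs. inhaled smoke

    # The "model" is nothing more than one multiplication:
    shs_deaths = SMOKING_DEATHS_PER_YEAR * SHS_RELATIVE_STRENGTH
    print(shs_deaths)  # 4000.0
    ```

    As the comment goes on to note, the output can be dressed up with as many decimal places and subpopulations as you like, but the entire result still rests on the two assumed inputs.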
    As with Einstein's 'thought experiments', however, their power depends entirely upon the THE FACTS. In the Theory of Relativity, all of Einstein's conjectures depended upon the speed of light being a constant, regardless of the speed at which the emitter of the light was moving.
    Is it possible that the basis of the models which you speak of could be unsound? Because, if it is possible, then the models are worthless. The obvious simple example would be building a skyscraper. You could have the most wonderful computer simulations of stresses and structures, but all of these would be worthless if you failed to observe that the site of the proposed building was a swamp.

    1. We seem to have basically the same view of that, but are using different words. I was emphasizing the fact that what you describe is basically a calculation and not a model. That is, there is really nothing about the process you describe that looks much like the real world, and a model should be a simplified version of the real world, not merely a calculating mechanism. In fairness, you could take the tool you describe and build it up to look more like the world. Then it would enter a grey area about whether it was really a model or not. For example, you could run it out through time, calculating the number for each period in the future. I would argue that that is still not a model, but it often gets called one.

      One of the problems if you just have a calculating mechanism is the one that you emphasize: about all you can do with it is make up some numbers to plug in and see what happens. That can have value, but it cannot tell you much about what the world does, just what it would do if certain things were true. Running some calculations to test an intuition can be a useful check on guesswork, of course. A thought ("SHS kills 40,000 Americans per year") is not the same as a thought experiment, in that it does not tell you anything by itself, but you can run some numbers to test it ("...oh, wait, it would have to be X% as bad for you as smoking to get that number -- I guess that is not plausible after all").
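
      That kind of plausibility check just runs the calculation in reverse: take the claimed number and back out what it implies.  The figures here are the illustrative ones from the parenthetical above, not real estimates:

      ```python
      # Back-of-envelope check of the kind described above: given a claimed
      # SHS death toll, back out the relative harm it would imply.
      # Both numbers are the hypotheticals from the text, not real estimates.

      smoking_deaths = 400_000     # hypothetical annual smoking deaths
      claimed_shs_deaths = 40_000  # the intuition being tested

      implied_relative_harm = claimed_shs_deaths / smoking_deaths
      print(f"SHS would need to be {implied_relative_harm:.0%} as harmful as smoking")
      # -> SHS would need to be 10% as harmful as smoking
      ```

      The check does not tell you what the world does, but it can flag an intuition whose implied inputs are implausible.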

