22 June 2010

Smokeless Tobacco Junk Science, the Original Winn Sin – Part 2, what the data shows

Following on the background from my previous post, this is further information about what Winn et al. avoided telling the world in their 1981 paper, thereby increasing how misleading the oft-quoted 50-fold risk estimate is.  (Readers encountering this in a newest-first ordering of posts may want to skip down and read the next (i.e., earlier) post first.  I thought I should break it up because the entire analysis is so long.)

Much of the bias is what I have labeled publication bias in situ (PBIS): authors making a biased choice, based on results that they prefer, of what to report from their data.  This primarily takes the form of running many statistical analyses of a dataset (each of which will produce a different result) and reporting only the results they like, without acknowledging that the other analyses were even done.  This is a problem that calls a huge portion of the epidemiologic literature into question, though very few people seem to be aware of it.

I should point out to my readers that I have reanalyzed this data several times and written about the results, but for purposes that were not quite the same as those of the present analysis and so were not optimized for this purpose.  I did not re-run any statistics or otherwise re-access the original data for this.  So if someone wants me to turn this into a more formal publication for another forum, I would want to analyze the data again to be able to provide more precise reports of its contents.

1.  Winn et al. failed to communicate just how intense the subjects’ exposures were.  This is not a biased analysis of the data, per se, but is an important omission that is likely to mislead readers.  Winn collected data on how frequently the exposed subjects used snuff and, as I recall, the median was about 21 hours per day.  That means that most of them used it basically all the time – while sleeping, eating, etc.  The subjects started using at early ages, some before they were ten years old.  Not in the data, though undoubtedly known to Winn from her research, was that Appalachian women (the study was restricted to women) used powdered snuff by rubbing it all over the surface of all of their gums.  These unreported details of the exposure mean that the risks for cancer of the gums and cancer of the inner surface of the cheek that touches them (the particular rare forms of oral cancer, discussed in Part 1, that showed the elevated risk in the reported results) would thus be elevated compared to those for a typical ST user, even apart from the product being different.  Failure to report these obviously important facts appears intended to mislead the reader into thinking that the results are more generalizable than they really are.

2.  The study’s main result is not directly relevant to the 50-fold figure, but it provides important perspective that should be considered.

The study is often cited as showing a 4.2-fold increase in risk from using smokeless tobacco.  This, of course, only applies to the particular form and intensity of usage noted above.  But this risk is also only for the non-black women who reported ever using snuff.  The much lower result for black women is almost never mentioned.  This does not seem to be because of the oft-quoted observation that the U.S. government did not care about black people, but rather because it makes the result seem larger than it actually was.  It turns out that if this unjustified exclusion is not made (i.e., no race is left out of the “real” result), the risk is below 4 (though it remains above 3.5 because of the smaller number of blacks in the sample, so if someone does not exaggerate the precision and simply rounds the result to a 4-fold increase, that still represents the statistics).  As an interesting aside, the 4.2 is often reported as being for “white” women (as the authors themselves misreported it), but a look at the data shows that it is actually for non-black women, including Native Americans along with whites, a very odd unexplained choice that further suggests trying to cook the data.
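
To illustrate the arithmetic, here is a minimal sketch in Python with made-up cell counts (chosen only to mimic the pattern just described, not the actual Winn counts) showing how excluding a subgroup with a lower odds ratio inflates the headline figure:

```python
# Minimal sketch: made-up counts chosen only to mimic the pattern described,
# not the actual Winn cell counts.

def odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    """Crude odds ratio from a 2x2 table."""
    return (exp_cases * unexp_controls) / (unexp_cases * exp_controls)

# (exposed cases, unexposed cases, exposed controls, unexposed controls) per stratum
non_black = (90, 100, 60, 280)   # stratum OR = 4.2
black     = (10, 55, 12, 130)    # stratum OR close to 2

pooled = tuple(x + y for x, y in zip(non_black, black))

print("non-black women only:", round(odds_ratio(*non_black), 2))   # 4.2
print("all women:           ", round(odds_ratio(*pooled), 2))      # about 3.7: below 4, above 3.5
```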

More significant for present purposes is the definition of exposure that you may not have even noticed when reading the previous paragraph:  Subjects who reported ever using snuff were included among the exposed.  Anyone doing epidemiology today would recognize that some distinction needs to be made between long-ago former users and recent users, and that the failure to do so in this study – while perhaps understandable given the primitive state of epidemiology in the 1970s – makes the results suspect.  This is especially bad given that many of the “exposed” used very little snuff in their lives and quit decades before the study.  Winn et al. could have separated out former from current users, and made even finer divisions than that – they had the data to do so.

If exposure measurement were perfect, inclusion of former users in the exposed group would likely cause an underestimate of any risk, since long-quit former users would have the lower risk of a non-user and so would dilute the average risk for users.  But this rule of thumb – that the bias would be downward – is misleading in this case for two reasons.  One requires some context, and so is described below.  The other is the well-known problem of recall bias:  Someone who is suffering from OC will naturally seek an explanation for the disease, thinking hard about possible relevant exposures, wanting to recall them rather than hide them.  By contrast, a healthy person who is chosen at random for the comparison group has little incentive to try to recall or admit to long-past substance use.  Similarly for recent decedents (part of the sample compared women who had recently died from oral cancer to those who had recently died from other causes), the relatives of those who suffered from oral cancer were more likely to have recently heard the subject recount a story of long-past oral product use, during the time of her final illness, than the relatives of those who died suddenly of a heart attack.  Thus, including former users tends to dramatically increase this bias, which overstates the association of the exposure and the outcome (it is possible that current users or relatives of those who used until their deaths could also deny the exposure, of course, creating measurement error even for current users, but misreporting former use does not require such blatant conscious lying).
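
As a minimal sketch of how this works (the numbers are entirely hypothetical), differential recall alone can manufacture an apparent association even when the exposure has no effect at all:

```python
# Minimal sketch with entirely hypothetical numbers: differential recall of
# long-past former use can manufacture an association even with no true effect.

def odds_ratio(a, b, c, d):
    # a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls
    return (a * d) / (b * c)

cases, controls = 200, 400
true_former_use = 0.20        # same true prevalence of long-past use in both groups
recall_if_case = 1.0          # cases (or their relatives) report essentially all past use
recall_if_control = 0.5       # healthy controls under-report long-past use

a = cases * true_former_use * recall_if_case
b = cases - a
c = controls * true_former_use * recall_if_control
d = controls - c

print("OR with no true effect, only asymmetric recall:", round(odds_ratio(a, b, c, d), 2))  # 2.25
```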

Returning to the relative risk of about 4, if this is the real main result, whatever its upward biases, why does the much larger figure of 50 even exist?  There actually is an explanation that can be attributed to a legitimate, honest analysis, though one that I need to explain here because the authors did not do so in their overly-abbreviated paper and Winn did not include this analysis in her much more complete dissertation.  (The latter observation is interesting in itself:  Though I can come up with an honest explanation for the existence of this analysis, it is a bit suspicious that Winn, and her advisors at what was one of the best epidemiology programs, did not feel the analysis was worth including in her dissertation, but the political actors who hired her after graduation wanted to add it.)

The analysis that included the 50-fold result appears primarily intended to help support the hypothesis that the observed actual risk (i.e., the 4) represents a real causal relationship rather than confounding (which is jargon for: there was something about those who used snuff that was different from those who did not – other than the use of snuff – and those other differences, rather than the snuff, were what caused the difference in disease outcomes).  Sometimes a behavioral risk factor reflects a different personality type that is just generally less healthy for a variety of reasons, and so a naive analysis might suggest that the behavior is causing the risk when it really is not.  For example, as I have previously argued, the Henley/American Cancer Society reports that are sometimes thought to show that there are disease risks from ST use really just show that among relatively wealthy, mostly middle-class, socially connected people in the U.S. in the mid-20th century, those who used ST were different from those who did not use tobacco – big surprise!  ST users had a higher risk for a wide variety of diseases, including violence/trauma and cirrhosis, diseases that were obviously not caused by ST, thus suggesting that all of the other elevated risks were also not caused by ST.  Indeed, when even very poor control variables were used to partially adjust for these differences, most of the apparent risk disappeared.  The nature of the Winn study means that risks of diseases other than OC could not be measured, but a similar analysis could be done by comparing the cancers at the specific anatomical sites where the snuff was held to the much more common cancers elsewhere in the mouth.

Another observation that can help support a conclusion of causation rather than confounding is a “dose-response” trend, such that people who are most exposed have the greatest risk, people with less exposure have some elevated risk but not as much, and those who are unexposed have the lowest risk.  While it is quite possible for the effect of confounding to also show such a dose-response trend, it is often reassuring to see some trend before concluding causation.  Thus, the analysis that produced the 50, which separated out the rare proximate cancers and looked at dose-response (albeit dishonestly – see below), could be seen as a reasonable attempt to test the claim of causation, or at least it could have been seen as honest and reasonable if the authors had not tried to interpret the results as having any meaning beyond that.

[Aside:  It is important not to overstate the value of these observations in discriminating causation from confounding or errors.  Sometimes people who do not understand the nature of scientific inference refer to these considerations as “causal criteria”, but there is actually no such thing as causal criteria, let alone a method for proving that an association is causal.  But that is another story.]

3.  Cutpoint bias:  The choice of points at which to divide up continuous data (like the number of years someone used snuff) into categories offers great opportunities for biased analysis.  Since there is no necessarily right way to do this – one set of cutpoints looks just as good as any other when the choice is made ad hoc (i.e., without reference to a stated standard or previously used methods, as was the case in the Winn paper) – it is impossible for readers to see that the choice was biased.  We recently analyzed this problem and proposed a solution, though implementing it requires either that authors want to report honest results, rather than take advantage of the opportunity to cook up results they prefer, or that editors and readers understand the problem well enough to demand it.

If the reader has access to the data, however, it is possible to assess what the result would have been if other cutpoints had been used.  In the case of the Winn data, most other choices that could have been made produce a dramatically less clear dose-response trend and a much smaller largest relative risk (i.e., less than 50).  The authors did not quite choose the most biased results possible; there are (just) a few ways to get (slightly) more extreme results from the data by choosing different cutpoints.  But authors who are trying to bias their results this way need to make their methods appear unbiased, and so need to restrict themselves to round numbers and other characteristics that make the cutpoints seem like a “natural” choice, and given this constraint they did about as “well” as they could.
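
To make the mechanics concrete, here is a minimal sketch using simulated data (not the Winn dataset; the years-of-use values, disease probabilities, and cutpoints are all invented for illustration) showing how different, equally “natural-looking” cutpoints yield different top-category odds ratios:

```python
# Minimal sketch using simulated data (not the Winn dataset): the same data can
# yield quite different "top category" odds ratios depending on which
# natural-looking cutpoint is chosen for years of use.
import random

random.seed(1)

def make_subject():
    years = random.choice([0, 0, 0, 5, 15, 25, 35, 45, 55])   # years of snuff use
    p_case = 0.05 + 0.002 * years   # a mild true duration effect, purely for illustration
    return years, random.random() < p_case

subjects = [make_subject() for _ in range(5000)]

def top_category_or(cut):
    """Odds ratio for (years >= cut) versus never users."""
    a = sum(1 for y, case in subjects if y >= cut and case)        # exposed cases
    c = sum(1 for y, case in subjects if y >= cut and not case)    # exposed controls
    b = sum(1 for y, case in subjects if y == 0 and case)          # unexposed cases
    d = sum(1 for y, case in subjects if y == 0 and not case)      # unexposed controls
    return (a * d) / (b * c)

for cut in (30, 40, 50):   # all equally "natural" round cutpoints
    print(f"top category {cut}+ years: OR = {top_category_or(cut):.2f}")
```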

4. Choice of dose measurement:  Someone reading up to this point may not have questioned the reference to dosage defined in terms of counting up the years during which the subject consumed any snuff.  Presumably Winn et al. were counting on exactly that failure to notice the oddity.  It is almost unheard of to measure dosage for an exposure like ST this way instead of in terms of total consumption (e.g., the “pack-years” measurement for cigarette dosage).  Even intensity of current consumption (quantity or time used per day) is probably a better measure of exposure than a measure that conflates someone who used a few times a year with someone who used large quantities for 24 hours a day over the same period.  If years of use were the only proxy for total exposure that the authors had, it might be worth analyzing the data with this as a proxy for dose (with a clear statement that they recognized it was not optimal).  What readers who have not seen the data would not know is that Winn had data on intensity of consumption and total consumption, though this is not mentioned in either her dissertation (which did not include any analyses based on dosage, and so had no need to mention it, though it might have been useful descriptive information) or the Winn et al. paper.

I bet that you can see where this is going.  Yes, you guessed right:  If you use the more legitimate measure of exposure dosage, it is impossible to get a dose-response trend that is so clear or such a big number at the top, even playing with the cutpoints.  It is possible that there is an honest explanation for the choice that the authors made, but since the paper does not even acknowledge that this choice was made, let alone defend it, it seems like a stretch.
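
As a minimal sketch of why duration-only dosing is such an odd choice (the subjects and numbers here are hypothetical), it conflates radically different total exposures that a cumulative, pack-year-style measure would separate:

```python
# Minimal sketch with hypothetical subjects: duration-only dosing conflates very
# different total exposures that a cumulative (pack-year-style) measure separates.

subjects = {
    "light user, long duration":     (50, 0.5),    # (years of use, hours used per day)
    "constant user, short duration": (10, 20.0),
    "constant user, long duration":  (50, 20.0),
}

for name, (years, hours_per_day) in subjects.items():
    duration_only = years                      # years of use, the measure Winn et al. used
    cumulative = years * hours_per_day * 365   # duration x intensity, analogous to pack-years
    print(f"{name}: duration-only dose = {duration_only} years, "
          f"cumulative dose = {cumulative:,.0f} snuff-hours")

# Ranked by duration alone, the light long-duration user sits above the constant
# short-duration user, even though her total exposure is a small fraction of the other's.
```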

5. Trimming away the unexposed OC cases:  Most readers of epidemiology who have a bit of knowledge about how to critically analyze results think to look at the number of exposed cases.  This number usually appears in the text, though seldom in the press release.  When it is quite small, as it often is, there is greater potential for certain biases and definitely greater instability.  But for the 50-fold result it is actually the low number of unexposed cases that drives the result.  But wait, you might ask, is not the number of unexposed cases the same for the main analysis as for the one that looks at the highest exposure level?  It turns out not, for several reasons.  The first reason is legitimate – recall that this analysis is restricted to the rare subset of cancers, which eliminates a lot of the cases.

But then Winn et al. decided to eliminate all of the deceased subjects, making some vague allusion to the possibility that relatives who were asked to recall periods of use would be less able to recall it than living subjects (this allusion is made especially vague since the very brief paragraph that includes it has prose editing errors that render it nonsense).  This concern might be valid, but it seems like it is less of a problem than is throwing out half of the already rather sparse data, and so conveniently removing all but 2 of the unexposed cases.  Because of this, the trimmed data makes it appear that almost no one who does not use snuff gets these particular cancers, and thus the relative risk is dramatically elevated. 

If the authors had been genuinely concerned with recall bias, they would not have included long-past former users in the exposed category, as noted above, since this dramatically increases the chance of recall bias.  Indeed, this is where the exposure definition really produces some misleading results.  Presumably most readers would be bothered if they had been told that subjects who had not used snuff for half a century before the study, and who had only used it briefly before that, were classified as exposed rather than unexposed.  As it turns out, if you change the classification for such individuals, the number of unexposed cases doubles and so the relative risk drops by about half.  Seriously.  If we merely do not count someone as “exposed” at the time of the study if she only used for a couple of years in her youth – half a century before she developed OC, and well before the start of the Great Depression, just to put that in perspective – then the 50 drops to 20.  Further tightening the definition of exposure causes further changes.  Even that 20 is still an exaggeration, but it is a much smaller exaggeration.
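
Here is a minimal sketch of that arithmetic, with hypothetical cell counts chosen only to show the order of magnitude (they are not the actual Winn counts):

```python
# Minimal sketch of the arithmetic (hypothetical cell counts chosen only to show
# the order of magnitude, not the actual Winn counts).

def odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    return (exp_cases * unexp_controls) / (unexp_cases * exp_controls)

# Highest-exposure category vs. never users, with only 2 unexposed cases left:
print("with 2 unexposed cases:", odds_ratio(20, 2, 30, 150))    # 50.0

# Reclassify two long-quit "exposed" cases as what they arguably are, unexposed:
print("after reclassification:", odds_ratio(18, 4, 30, 150))    # 22.5, roughly the 50-to-20 drop
```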

Note that this is a case where the recall bias does not create a downward bias in the risk estimate, as the naive rule of thumb says it will.  There will also be some “exposed” noncases that had not used for decades, who would increase the number of unexposed noncases if classified correctly, which would slightly increase the relative risk.  But in the dose-response analysis, these are stripped out of the low-exposure category (a long-quit former user could not have used for 50 years), which does not affect this estimate at all, and added to the never-user noncases, a cell which – unlike the unexposed cases – had not been trimmed down to 2, so the effect is not as great.  If they were stripped out of the exposure group that was being studied (as would be the case if only exposed-vs-unexposed were being considered), this would increase the relative risk estimate, balancing the decrease, but that is not what happens in this case.  Moreover, I believe that it turns out in this particular case that the subjects who were most absurd to count as exposed happened to be cases (perhaps because of the recall bias problem already noted).


In summary, a review of the data that was used to produce the Winn et al. paper reveals several ways in which the analysis was biased to exaggerate the statistics that were reported.  Such methods of biasing the data in ways that can easily be hidden from most readers appear to be fairly common in analyses by activists, and this PBIS calls into question large parts of the epidemiology literature.  It is difficult to even count all of the reasons why the 50-fold risk number has no place in an honest discussion of THR or the risks from ST more generally.  Given that many of these reasons (from Part 1 of this analysis) are well-documented or simply obvious, anyone who presents that number is either trying to lie or is so ignorant of the science that they really should not pretend otherwise.  As Part 2 of the analysis shows, those accusations go doubly for Winn, the NCI, and others involved with the analysis, since unlike most readers they know the additional hidden dirty secrets that can be found in the data.

Smokeless Tobacco Junk Science, the Original Winn Sin – Part 1, background

This seems like a fitting first post for this blog, since the paper in question played a substantial role in creating the path that led to much of my current work in both tobacco harm reduction (THR) and recognizing the importance of publication bias in situ.


A few weeks ago, Brad Rodu published an analysis ("Winn’s Legacy: The Fifty Fabrication") that pointed out how a misleading number from a thirty-year-old paper continues to appear in the anti-THR literature, in statements by the U.S. government’s executive branch to the U.S. legislature (i.e., the bureaucrats are lying to the people’s representatives), and in other forums.  The figure is the erroneous claim – based on the 1981 paper by Winn and her then-new colleagues at the U.S. National Cancer Institute, which was based on her dissertation research from the 1970s – that smokeless tobacco (ST) increases the risk of oral cancer (OC) by a factor of fifty.  This number misled many well-meaning anti-tobacco activists into discouraging THR back in the 1980s, and now provides ammunition for the dominant anti-tobacco extremist faction to discourage THR even though they surely know they are lying when they present the number.  As a result, it has contributed to the deaths of countless smokers who would have switched to the low-risk alternative had they known the truth.  While it is impossible to quantify the counterfactual, the intensive use of this number in anti-THR propaganda, particularly 5 to 15 years ago, means that it probably had substantial independent effects, and it seems reasonable to guess that it killed thousands of smokers who would have otherwise been saved.

It is worth noting that this is not a case of innocent scientists publishing a scientifically valid result and then activists taking it and misusing it:  The authors put the 50-fold figure in their abstract, even though they undoubtedly knew how utterly misleading it was.  The NCI is among the worst historical offenders in publishing disinformation to pretend that ST causes risks similar to those from smoking, including the fifty claim, and is still committed to misleading smokers about their options for lowering their risks.  Winn herself has perpetuated the fifty estimate, though she obviously knows what it really means and that it is almost always misinterpreted, as well as knowing what manipulations of the data it took to cook up the number (the main point of the present analysis).  I am aware of no case where she or NCI made an effort to correct the misperceptions.  Winn now seems intent on ending her career by redoubling the needless mortality she caused by discouraging harm reduction, as noted by Rodu here and here.  Her initial contribution to this at the start of her career – when there was less definitive knowledge that the risks from ST are very small and THR was not so clearly the best public health intervention regarding smoking – might be seen as an accidental sin, but it is difficult to see how the current actions are forgivable.

The purpose of the present analysis is to report on some biases in that estimate that Rodu could not have included because he, like most people in the field, has never had a chance to look at the original data.  In contrast with more honest sciences, it is common for epidemiologic data that is used to produce published articles, even those that have huge policy implications, to be kept secret in perpetuity.  This seriously tortures the definition of “publication”, renders peer review almost meaningless, and means that a lot of policy is based on junk science.  Fortunately, I am one of the few researchers who has a copy of the Winn data.

[Aside:  To clarify, I believe that the causal pathway is not that fundamental problems with the science of epidemiology caused the acceptance of hiding data.  Rather, an unfortunate accidental historical path that made it acceptable to keep data secret resulted in epidemiology attracting people whose work could not stand up to scrutiny because of its low quality or political bias.  But because such people then came to dominate the field – particularly in areas that are highly political – and thus its publications, rules, gatekeeping, budgets, etc., what might have been a temporary problem of a young science became institutionalized by those in power who do not want their junk science exposed, or who do not even realize they are doing junk science but simply want to preserve their empires.]

I have written about what the Winn data shows before, but clearly did not do so loudly enough, since Rodu was surprised when I submitted some of what appears below as a comment on his blog.  Instead of posting that comment, he suggested that I write something in greater depth, and so here it is (and thanks to him for the suggestion and comments on the draft).  For those who have stumbled across this – perhaps because it is such a good example of how unethical biases are introduced into epidemiology research reports – but do not know the basic facts about smokeless tobacco, you might want to consult the FAQ at TobaccoHarmReduction.org or the background chapters in the Tobacco Harm Reduction 2010 Yearbook.  For those who are aware of the basic facts but not the Winn legacy, see the Rodu posts that are linked from here.

Before addressing what can only be learned by looking at the data, I will start with a few epistemological observations about that 50-fold increase statistic that are sufficient to show that approximately every use of that number represents either an intentional lie or fundamental ignorance of basic scientific research methods and/or the content of the original article.

1. Epidemiologic estimates are not physical constants.  As anyone who paid attention to even one decent class in epidemiology knows, the results depend on the specifics of the exposure (which likely change over time), the population and their other exposures (which inevitably change over time), and the exact outcome being measured (which might also change based on contemporary assessments of the right measure of a phenomenon).  Also, methods for analyzing the exposure-disease-population combination in question often improve, rendering old analyses obsolete (see the discussion of “ever users” in Part 2).  Thus, anyone who cites an effect estimate from more than three decades ago as if it were a constant clearly has no business reporting on health science – they obviously do not understand it.

I would hope that it is obvious to my readers that other social science measurements (e.g., what is the benefit of a college degree? what portion of babies are raised by unmarried couples?) are not constants over time and across populations, and that they would never quote an estimate based on a study of mostly rural elderly women in North Carolina in the 1970s as if it applied to everyone and were still true.  Though even something that obvious is not obvious to everyone:  For example, you still occasionally see the claim that there are exactly 76 million cases of foodborne disease annually in the U.S., which is based on an extremely rough modeling exercise from 1999, which in turn is based on studies from earlier than that (and which, incidentally, I and others showed to be a bad estimate even at the time, but that is a different point).  Apparently it does not even occur to people who repeat that number that the population size, the quality of the food supply, and many other factors have changed dramatically in more than a decade, and thus even if the number had been exactly right at the time, it would no longer be.

In fairness, a relative risk estimate will often be more stable over time than some of these other social science measures, but it is still not stable.  The exposure (product type, etc.) changes, populations change (which mainly means that causal co-factors and competing causes of censoring change), and even disease ascertainment changes (diagnosis, definitions).  So, in short, even if the Winn estimate had been unbiased and meaningful at the time it was published, it would be of only historical value now.

2. To repeat the point that Rodu emphasizes in his post, even if the estimate were unbiased and relevant to today, it was not an estimate of the risk of OC.  First, it was limited to a particular rare form of OC.  Indeed, the specific analysis in which the number was presented actually emphasizes that other, much more common, forms of OC did not show a measurable increase once the rarer cancers were separated out.  So claiming that the statistic represents the risk of OC as a whole is like claiming that slicing bagels is the leading cause of traumatic injury because it is the leading cause of one particular traumatic injury (laceration injuries to the hand that require urgent care) that represents a tiny fraction of all injuries.  This is particularly important because the specific OCs that generate the statistic are so incredibly rare (and even far more so in the absence of smoking and heavy drinking) that even if there were a 50-fold increase in risk, this would not be very significant in terms of lifetime disease risk.  But since anyone trafficking in the large number undoubtedly realizes that most readers/listeners will think otherwise when they hear the big number, even if the number were a valid measure for the rare disease risk, presenting it without the caveat that the absolute risk is trivial would still constitute scare-tactic propaganda rather than the honest communication we should demand of the government and others.

Second, the result applied only to the group who had used ST for more than 50 years.  This point is sometimes alluded to when the number is cited, with a phrase like “those using the product the longest”, though that phrase is not something readers will pay much attention to (as the authors of such statements no doubt realize) and fails to really capture the point that these subjects had been using ST constantly since the 1920s or earlier (and the data analysis, below, shows that their usage was even more extreme than that implies).  Moreover, most of the asides about the exposure group seem to use a phrase like “the heaviest users”, which implies to the readers that a 25-year-old who uses a lot of ST is at this level of risk.  Rodu, I, and others have written about these points extensively, so I will not belabor them.

3. The exposure studied by Winn was mostly the local variety of powdered dry snuff preferred by traditional Appalachian women in the mid-20th century and, even more so, earlier than that.  This means not only that exposures change over time, as noted above, but that even at the time of the study this exposure was different from the common exposures (chewing tobacco and moist snuff).  Moreover, the population itself is rather unusual, which further erodes the generalizability of the result.  The one or two other studies that were able to separate out a tiny bit of data for this particular exposure and population also reported a measurable risk for oral cancer.  For those who do not know, these represent outliers from the numerous studies of all American and Swedish ST products that have shown that there is no measurable risk of OC.

For more information, Rodu has written extensively about this point, as have I and others.  Some of those authors have concluded that the Winn study’s main result (not the cooked-up 50 statistic – see Part 2) and the other smaller studies mean that those archaic products caused a substantial risk for oral cancer.  Others suggest that we cannot be sure this is why the Winn study is such an outlier, though it is a plausible hypothesis, and we will never know for sure.  But either way, this means that even if the 50 statistic were an accurate estimate of something, it would have basically no relevance to the products that people use today.  Clearly it would have absolutely no relevance to the modern products that are promoted for THR.

4. The result is extremely statistically unstable (i.e., it was very dependent on the luck of the draw in terms of who ended up in the sample).  This makes the result very easy to manipulate in the ways described in Part 2 as PBIS.  Even apart from the points in Part 2 about how the data was distilled to get an impressive result, the mere fact that it is so sensitive to the particular sample Winn chose (which people who know some statistics might recognize by the phrase “very wide confidence interval”) means that the result should never be reported as if it were a precise estimate of the exact risk.  If this were the only concern, one could still say “a very large multiple” or something like that, but it is misleading to imply that we actually have such good information that we can quantify the estimate.
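
As a minimal sketch of what that instability looks like (the cell counts are hypothetical, chosen only because they give an odds ratio of about 50 with very few unexposed cases), a standard Woolf-type confidence interval makes the point:

```python
# Minimal sketch with hypothetical sparse counts (chosen only because they give an
# odds ratio near 50): a Woolf-type 95% confidence interval shows how little
# precision such an estimate actually has.
import math

def or_with_ci(a, b, c, d, z=1.96):
    # a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

or_, lower, upper = or_with_ci(20, 2, 30, 150)
print(f"OR = {or_:.0f}, 95% CI roughly {lower:.0f} to {upper:.0f}")
# With only 2 unexposed cases the interval spans roughly 11 to 225, which is far
# too wide to justify quoting "50" as if it were a precise constant.
```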

5. A final reason why this number should not be cited as a reason to avoid THR or ST more generally is a bit more subtle, and requires knowledge of the world and not just statistics, but it should be instantly recognizable as valid:  Even if it were true that someone initiating ST right now would have a 50-fold increase in OC risk fifty years from now, who cares?  Seriously.  Anyone with enough literacy, wealth, and motivation to have access to this propaganda is of a social class high enough to, almost certainly, have access to a high-probability cure for OC fifty years from now.  Just think about the progress in medical technology in the last fifty years, and then about the rate of acceleration of technology.  This is nothing like the heart attack that might kill a fifty-year-old smoker tomorrow or the emphysema that will likely be irreversible in his lifetime.  A dramatic multiplication of someone’s (very, very low baseline) risk for oral cancer fifty (or even forty, and probably even twenty) years from now simply does not matter very much.  This is obviously not to say that we should not endeavor to prevent cancer or that current OCs are not terrible diseases, of course, but discouraging a behavior that could save many lives (via THR) or that people simply really like, based on claims about a few cancers that will not occur until they are extremely likely to be curable, is obviously indefensible.


With all that as background, I will now proceed in Part 2 to discuss what is not widely knowable about the Winn paper, because it requires analyzing the data.  One might argue that since the result is so clearly irrelevant for the above reasons, there is little point in this.  But since those incredibly obvious reasons do not seem to have stopped the disinformation, it cannot hurt to pile on a few others.  In addition, this serves as an important illustration of the methodologic and ethical problems that are rife in epidemiologic publishing, particularly in areas where those reporting the results are more activist than scientist.  For example, someone who realizes how this number was cooked will be more likely to see that the anti-THR propaganda that has been published by the Karolinska Institute over the last few years is pretty clearly cooked.  Oh, and the full reference for the Winn study is N Engl J Med. 1981 Mar 26;304(13):745-9, Snuff dipping and oral cancer among women in the southern United States, Winn DM, Blot WJ, Shy CM, Pickle LW, Toledo A, Fraumeni JF Jr. – I mention this as a hint to teachers:  If you are looking for teaching articles to demonstrate dramatic over-conclusion, naive epidemiology methods, and unsubstantiated policy prescriptions, you really cannot beat the New England Journal of Medicine.

10 June 2010

Introduction

Welcome to my new blog.  I created this as a home for analyses that do not fit the TobaccoHarmReduction.org blog, either because they are on a different topic or because they are longer and more detailed than what normally appears on that blog (an awkward heterogeneity was occurring there).  Any quick-hit posts on THR I write will still appear over there, and I will try to post a note there pointing to here when I publish an analysis on that topic.