
Complex science is very hard

Three articles which illustrate the difficulty of the sort of science that tackles what Jim Manzi would term phenomena characterized by high causal density. First, the simplest one is the report that extrapolating from some mouse models to human biological systems may be problematic. Anyone who has talked to human geneticists who use mouse models is aware that these inbred lineages can be rather particular. Order the wrong mice, and all of your experimental designs might be for naught. So the result is not surprising, but it seems useful to have it documented in such a concrete fashion (though this has been reported in the media before).

Second, a long piece in The Chronicle of Higher Education on the problems in replicating groundbreaking research in the area of priming. This may be a case of a seemingly robust result which fades into irrelevance as time passes, and it illustrates the fundamental problem of attempting to do science on humans: we’re diverse and protean. I think the jury’s out on this, and we’ll wait and see. Fortunately this probably won’t be an issue we’ll be debating in 10 years, as replications will start to occur, or they won’t.

Finally, a moderately scathing review in The Wall Street Journal of the book Blindspot: Hidden Biases of Good People. Here’s the final paragraph:

There is far from a consensus about the IAT—a meta-analysis, you might say, is overdue. It turns out that the authors themselves published one in 2009, reviewing 184 independent samples and nearly 15,000 experimental subjects. The result: The IAT was very weakly correlated with other measures, failing to account for more than 93% of the data. Interestingly, Ms. Banaji and Mr. Greenwald don’t report this in their book [the authors of Blindspot]. Perhaps a blind spot?
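For a sense of scale, here is a minimal back-of-envelope sketch of what that figure implies, assuming (my reading, not a claim made in the review) that “failing to account for more than 93% of the data” means the IAT explains at most about 7% of the variance in the behaviors it is supposed to predict:

    import math

    # Assumption: "accounts for at most ~7% of the data" is read as an upper
    # bound on variance explained, i.e. r^2 <= 0.07.
    variance_explained = 0.07             # assumed r^2 upper bound
    r = math.sqrt(variance_explained)     # r^2 = proportion of variance explained
    print(f"implied correlation r <= ~{r:.2f}")  # prints roughly 0.26

If that reading is right, the implied correlation is in the mid-0.2s, which is consistent with the reviewers’ characterization of “very weakly correlated.”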

You surely know about the IAT, the Implicit Association Test. You’ve probably even taken a test online, purporting to measure your bias against particular groups (I have). But here is where “inside knowledge” counts. Years ago a friend who was a cognitive psychologist told me privately that he and many others within the field were very skeptical of the utility of these tests for predicting anything of substance, even though they were media-friendly. This individual has a good track record, as he was the one who alerted me to the serious problems with Jonah Lehrer’s work as far back as 2006.

Does this mean that you should ignore all science which derives from attempting to infer associations in domains where complexity is the rule? Not at all. But caution is warranted. The reality is that these are the areas where we as humans need to go to discover novel and powerful patterns. But because these are often social or medical domains with immediate real-world consequences, we need to be methodologically sound and not jump the gun. And, unfortunately, excessive early attention in the media is probably a very bad, perhaps negatively correlated, proxy for how solid a given result will be in the long term.
