While he makes a number of good points, I don’t think he quite addresses some of the issues raised by Robin Hanson and the original Atlantic piece. Manzi defends medicine in general, and this may serve as a useful corrective to those who believe that medical knowledge is completely useless. But with a few exceptions (maybe Robin Hanson), I don’t think many medical skeptics fall in that camp. Perhaps the quoted estimates of medical error are on the high end. But that doesn’t take away from the fact that there are serious issues in how medical knowledge is formed.
Take, for instance, several past Hanson posts. Doctors believe in breaking fevers, though there is no evidence that this helps. Flu shots also don’t seem to work. I’ve also mentioned how ulcers came to be attributed to “stress,” when in fact they were clearly caused by bacterial infection. Meanwhile, several large-scale tests of medicine use, from the RAND insurance study to the 2003 Medicare drug expansion, find minimal evidence that more medicine leads to better health.
I think our body of medical knowledge does illustrate how hard it can be to generate reliable knowledge, even in cases where we can easily run numerous randomized experiments. While Manzi emphasizes the difficulties of long-term, behaviorally oriented interventions, the corpus of verifiable medical mistakes is quite large and spans several fields.
Manzi also argues that it is difficult to judge the effects of particular medical treatments given complex causal pathways and varied lifestyle choices. This is a reasonable point, and one alluded to in the Atlantic piece as well (“Just remove all the meds”). But it cuts against the tenor of his original essay: that randomized experiments can serve as a useful corrective in exactly the situations where we have causal density, where mere observational or correlational studies (even ones that “control” for various background characteristics) are insufficient to generate true knowledge.
If the Manzi of today is correct, then the sheer complexity and chaos of the human body defy the bounds of even randomized experiments, which by their nature are designed to test specific hypotheses and rarely examine the impact of numerous treatments across different settings. If this is the case, it’s difficult to imagine any scientific procedure that would reliably generate medical knowledge. It’s also difficult to see randomization as a silver bullet in the social sciences.
I haven’t even gone into the several ways (as Heckman emphasizes) in which randomized trials may be flawed. While held up as the “gold standard” for estimating treatment effects, randomization as practiced faces many limitations. Randomized trials cannot answer general equilibrium questions, and they often provide estimates that apply only in limited domains, without broader generality. As the use of randomized experiments increases, scholars increasingly address questions that are easy to answer (e.g., do sumo wrestlers cheat?) rather than tougher questions that do not have easy solutions.
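The limited-domain problem is easy to see in a toy simulation. The numbers below are entirely hypothetical (an assumed treatment effect of +2 in one subgroup and 0 in another): a trial run on only one subgroup recovers that subgroup’s effect cleanly, yet tells you little about the population as a whole.

```python
import random

random.seed(0)

def outcome(treated, subgroup):
    # Assumed effects: the treatment helps subgroup "A" (+2.0)
    # but does nothing for subgroup "B" (0.0).
    effect = 2.0 if subgroup == "A" else 0.0
    return (effect if treated else 0.0) + random.gauss(0, 1)

def rct(subgroups, n=20000):
    # A clean randomized trial: each subject is drawn from the listed
    # subgroups and assigned to treatment or control by coin flip.
    treat, control = [], []
    for _ in range(n):
        g = random.choice(subgroups)
        if random.random() < 0.5:
            treat.append(outcome(True, g))
        else:
            control.append(outcome(False, g))
    return sum(treat) / len(treat) - sum(control) / len(control)

est_restricted = rct(["A"])        # trial run only on subgroup A
est_population = rct(["A", "B"])   # the full, mixed population
print(round(est_restricted, 2), round(est_population, 2))
```

Both estimates are internally valid, but the restricted trial (near 2.0) badly overstates the benefit for the mixed population (near 1.0). Nothing in the randomization itself warns you which domain the estimate covers.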
I’m not against randomized experiments. But I don’t think they will fix the social sciences, given that they haven’t fixed medicine. Rather, entrenched forms of human bias seem to plague both fields.