Medical Knowledge


Jim Manzi has a good reply up at TAS on our degree of medical knowledge, discussing an Atlantic article I also go into here.

While he makes a number of good points, I don’t think he quite addresses some of the issues raised by Robin Hanson and the original Atlantic piece. Manzi defends medicine in general, and this may serve as a useful corrective to those who believe that medical knowledge is completely useless. But with a few exceptions (maybe Robin Hanson), I don’t think many medical skeptics fall in that camp. Perhaps the quoted estimates of medical error are on the high end. But that doesn’t take away from the fact that there are serious issues in how medical knowledge is formed.

Take, for instance, several past Hanson posts. Doctors believe in breaking fevers, though there is no evidence that doing so helps. Flu shots also don’t seem to work. I’ve also mentioned how ulcers came to be attributed to “stress,” when in fact they were clearly due to bacterial infection. Meanwhile, several large-scale tests of medicine use — from the RAND health insurance experiment to the 2003 Medicare drug expansion — find minimal evidence that more medicine leads to better health.


I think our body of medical knowledge illustrates how hard it can be to generate reliable knowledge, even in cases where we can easily run numerous randomized experiments. While Manzi emphasizes the difficulties with long-term, behaviorally oriented interventions, the corpus of verifiable medical mistakes is quite large and runs across several fields.

Manzi also argues that it is difficult to judge the effects of particular medical treatments when considering complex causal pathways and different lifestyle choices. This is a reasonable point — and one alluded to in the Atlantic piece as well (“Just remove all the meds”). But it goes against the tenor of his original essay — that randomized experiments can serve as a useful corrective in exactly the situations where we have causal density, where mere observational or correlational studies (even ones that “control” for various background characteristics) are insufficient to generate true knowledge.

If Manzi (of today) is correct, then the sheer complexity and chaos of the human body defy the bounds of even randomized experiments, which by their nature are designed to test specific hypotheses and rarely examine the impact of numerous treatments in different settings. If this is the case, it’s difficult to imagine any scientific procedure that would reliably generate medical knowledge. It’s also difficult to see randomization as a silver bullet in the social sciences.

I haven’t even gone into the several ways (as Heckman emphasizes) in which randomized trials may be flawed. Though held up as the “gold standard” for estimating treatment effects, randomization as practiced has real limits. Randomized trials cannot answer general-equilibrium questions, and they often provide estimates that apply only in limited domains, without broader generality. And as the use of randomized experiments grows, scholars increasingly address questions that are easy to answer (e.g., do sumo wrestlers cheat?) rather than tougher questions without easy solutions.
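
To make the external-validity worry concrete, here is a minimal simulation sketch (my own illustration, not anything from Heckman or Manzi), assuming a hypothetical treatment whose true effect differs across two populations. An RCT run in population A is internally valid — it recovers A’s effect without bias — yet its estimate would badly mislead anyone generalizing it to population B:

```python
import random

random.seed(0)

def run_rct(effect, n=10_000):
    """Simulate a simple RCT: each outcome is standard-normal noise,
    plus the treatment effect for the treated arm. Returns the
    difference in mean outcomes (the unbiased RCT estimate)."""
    treated = [effect + random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

# Hypothetical heterogeneous effects: a clear benefit in A, slight harm in B.
est_a = run_rct(effect=1.0)
est_b = run_rct(effect=-0.2)

print(f"estimate in A: {est_a:.2f}, estimate in B: {est_b:.2f}")
```

Each trial is internally valid in its own setting; the problem arises only when the estimate from A is presented as “the” treatment effect everywhere — which is precisely the one-off generalization the post warns about.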

I’m not against randomized experiments. But I don’t think they will fix the Social Sciences, given that they haven’t fixed medicine. Rather, it seems entrenched forms of human bias plague both fields.

7 Comments

  1. Gastric ulcers may not be caused by stress, but they can be caused by NSAIDs as well as by H. pylori. Granted, H. pylori is the most common cause of gastric ulcers.

  2. Thanks for the thoughtful comments.

    I don’t think that this post goes against the tenor of my earlier article at all, or that RCTs will “fix social science.” Let me see if I can summarize my beliefs about RCTs:

    1. Randomized experiments are essential to establishing the internal validity of a finding in an environment of sufficiently high causal density, as is found in most social science and some medical environments. (Typically therapeutic, as opposed to many surgical environments, although Pasteur’s anthrax vaccine experiments are an example of a therapeutic trial of an effect so profound that it was, in effect, monocausal.)

    2. Therefore, we should be extremely skeptical of any finding in such causally dense areas that is not validated through RCTs.

    3. The problem of external validity (or as I put it in more layman’s terms in the article and post, “generalization”) remains eternal.

    4. External validity in social sciences is especially problematic (which is one of Heckman’s central points). This means that in social sciences, we need multiple replications of any experiment before we can rely on the finding. Arnold Kling, in reacting to the article, called this principle “Don’t trust one-offs,” which is a great way to put it.

    5. A greater use of RCTs would help social science, and should be pursued aggressively.

    6. RCTs are, in fact, becoming more central to social science.

    7. Some commercial enterprises have developed techniques that could be usefully imported into social science that would help with this.

    8. But, the causal density (and holistic integration, which is a term I define in the upcoming book, though not in the excerpted article) of the social environment should make us skeptical that even widespread use of RCTs will transform social science so that it could create the kind of practical benefits of physics, chemistry or biology.

  3. It is instructive to consider known cases of scientific fraud. The great majority of these are found in medicine. All the major medical journals have had a substantial number of papers withdrawn for outright fraud. How many physics journals have had fraudulent (as opposed to “not even wrong”) papers?
    Or chemistry journals, geology journals?

  4. bob sykes, you may be interested in this study which Thorfinn linked to in his previous post on the topic:
    http://www.plosone.org/article/info:doi/10.1371/journal.pone.0010068

  5. TGGP, thanks for the citation. An interesting progression.

    You might like,

    William Broad and Nicholas Wade (1982), “Betrayers of the Truth: Fraud and Deceit in the Halls of Science,” Simon & Schuster, New York.

    My own personal experience in a big-time engineering college is that a significant fraction of faculty engage in questionable or fraudulent activities, perhaps 10% or so. When discovered, the administration always attempts a cover-up and usually succeeds. Whistle-blowers are usually punished, often severely. This is typical of all institutions; the reputation of the institution must be protected at all costs, even if obviously guilty people get away with their misdeeds.

    How much this affects the quality and reliability of the information available to scientists and engineers, I don’t know. In the hard sciences, errors and fraud are usually discovered, but it might take years or even decades for this to happen. In the so-called social “sciences”, errors and fraud persist indefinitely, viz. Margaret Mead.

  6. As a physician in private practice who left an academic job at a top-10 university 20 years ago because I thought the whole process was incredibly corrupt, here are my observations:

    1. Bias – I think Hanson has a point here.
    2. Big science/NIH and the distortions it creates: correct.
    3. Complex systems are not easily amenable to analysis.

    I would like to add that part of the problem is that there are way too many MDs at academic programs. Most of them (not all) are mediocre and lazy, hired mainly because faculty physicians do not like to dirty their hands on patients. They make less, but they are far less productive than even their low pay would suggest (as an example, the volume we handle in our practice would require 3-4 times as many physicians in an academic medical center, and the salary differential is not 3-4 times). The point I make is that the majority of so-called academic physicians are basically low-capability seat warmers. They realize that to get their next promotion they need to generate a volume of papers (volume, not quality), so they generate volume. They have neither the skill set, the training, nor the analytic reasoning to do science adequately. This corrupt endeavor is facilitated by a profusion of journals that need studies to feed them, so: garbage in, garbage out. In my field, I come across one or two studies every couple of years where the right question and methodology were used, and where there is a semblance of a relationship between results and conclusions. Most of the time there is no basis for the conclusions drawn from the data presented. Bottom line: stupid people will publish stupid shit.

    So what is a physician who wants to do the right thing to do?

    My suggestion is that you have to go back to time-honored empiricism: if you are observant, intellectually analytical, and curious, you take a study and see if it is consistent with your own clinical observations and physiologic understanding (and, if need be, your biases) — your clinical intuition, if you will. The worst offenders in this culture of cargo-cult science are the professional associations and their so-called consensus statements.

    Sounds Neanderthal, I know, and I would not have given this answer 5-10 years ago, but I get progressively more jaded as the years go by.

  7. Mead’s reputation may be making a comeback:
    http://savageminds.org/2010/10/13/the-trashing-of-margaret-mead/
