Friday, September 14, 2007
In his September 14, 2007 op-ed piece in the New York Times, David Brooks shares his impressions of the latest research on cognitive ability. Unfortunately, he not only misses the forest, but he bungles a few trees as well. Article and comments below.
"A nice phenomenon of the past few years is the diminishing influence of I.Q."

Right out of the block he is off. In what domain was there once a non-zero IQ-outcome relation, but now, X number of years later, the relation has shown a systematic decrease? From the generality of the statement, one would expect this to hold across most, if not all, pertinent domains (e.g., occupation, academic success, etc.). However, that is not the case. Not only do the IQ-achievement and IQ-occupation relationships still hold, but there is now a burgeoning new field in the area, cognitive epidemiology, that looks to see how health outcomes are related to cognitive ability. Deary et al. give a terse summary here, and Gottfredson gives a conceptual overview here. But, perhaps more interesting, researchers who have no interest in intelligence per se are finding similar results: a case in point is Yaakov Stern's cognitive reserve research, which shows that people with higher IQ scores tend to have less severe symptoms of Alzheimer's disease. As this is a new area of inquiry, the exact nature of the relationship has not been identified, but one thing we can say for sure is that there is no diminishing influence of cognitive ability.

"For a time, I.Q. was the most reliable method we had to capture mental aptitude. People had the impression that we are born with these information-processing engines in our heads and that smart people have more horsepower than dumb people."

These two statements have little to do with each other. IQ (at least as derived from a Full Scale score) has been, and still is, very reliable for most age groups and subpopulations, no matter how you measure reliability. For example, the Woodcock-Johnson, one of the more theoretically sound measures of cognitive ability, reports in its new normative update coefficient alpha values (which are a lower bound of reliability) above .90 for all ages from 3 to over 80. Given that the maximum value alpha can take is 1 (under almost all circumstances), this is pretty good evidence (a computational sketch of what alpha actually computes follows below). If you look at the technical manual for the Wechsler, Stanford-Binet, or Reynolds Intellectual Assessment Scales, you'll find very similar values (I refer to these only because their norms span a very large age range and their full scale scores are derived from multiple subtests). I challenge Mr. Brooks to find a more reliably measured construct in psychology, nay, in the social sciences.

The second statement, while perhaps overstated, is true. People are born with brains, these brains process information, and smarter people (as measured by IQ scores) tend to process information faster (see, for example, here and here). What impression should people have instead? That people are born with a blank slate and all of life is little more than the acquisition of stimulus-response patterns? Skinner died in the 1990s, and strict adherence to this view died long before that (a great book about this).

"And in fact, there's something to that. There is such a thing as general intelligence; people who are good at one mental skill tend to be good at others. This intelligence is partly hereditary. A meta-analysis by Bernie Devlin of the University of Pittsburgh found that genes account for about 48 percent of the differences in I.Q. scores. There's even evidence that people with bigger brains tend to ..."

No disagreement here.
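For readers who have never met coefficient alpha: here is a minimal computational sketch with simulated data. The one-factor toy model, the sample size, and the variable names are my own inventions for illustration; this is not anything from the WJ norming.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient (Cronbach's) alpha for an examinees-by-items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]                         # number of items (or subtests)
    item_vars = items.var(axis=0, ddof=1)      # each item's variance across examinees
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total (composite) score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 500 simulated examinees, 10 items that all tap one common factor,
# so the items intercorrelate and alpha should come out high.
rng = np.random.default_rng(0)
ability = rng.normal(size=(500, 1))
scores = 0.8 * ability + 0.6 * rng.normal(size=(500, 10))
print(round(cronbach_alpha(scores), 2))        # ~0.95 for data this internally consistent
```

Alpha is a lower bound on reliability, and it tops out at 1, so the above-.90 values the major batteries report for their full scale scores really are about as good as measurement gets in the social sciences.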
"But there has always been something opaque about I.Q. In the first place, there's no consensus about what intelligence is. Some people think intelligence is the ability to adapt to an environment, others the capacity to think abstractly, and so on."

Ah, the slippery slope begins. These arguments are so old, and so well answered in the literature, that it is almost painful to repeat them. I refer the interested (and Mr. Brooks) to Seligman's phenomenal, non-technical introduction, as well as Deary's brilliant literary corpuscle. First, IQ and intelligence are two different things. One is a measuring instrument's scale and the other is a psychological construct that is measured, to one degree or another, by an IQ test. We don't confuse inches and paper, so why do we confuse IQ and intelligence? Second, few scholars actually study "intelligence." While the word might be used in common parlance, there is no common definition. Instead, most serious scholars study general intelligence (g) or one of its sub-constructs (e.g., fluid abilities, crystallized abilities; see here or here or here). Once you make the jump to g, the definition becomes much more consensual. There are technical debates (as there are in any branch of science), but its measurement (by factor analysis of one flavor or another) is virtually undebated; a bare-bones factor-analytic sketch follows at the end of this passage. For most purposes in daily life, it is OK to quasi-equate intelligence and g, as well as IQ scores and intelligence, but they really are quite different concepts.

"Then there are weird patterns. For example, over the past century, average I.Q. scores have risen at a rate of about 3 to 6 points per decade. This phenomenon, known as the Flynn effect, has been measured in many countries and across all age groups. Nobody seems to understand why this happens or why it seems to be petering out in some places, like Scandinavia."

IQ scores, across generations, need to be re-calibrated for valid comparisons. We have ways of doing this very well (latent trait models), and they have very sound theory behind them; a linking sketch also follows below. You have to periodically re-calibrate your bathroom scale, yet you have no question about what it is measuring; why should IQ be any different? As a side note, this phenomenon is not at all confined to IQ tests, and it has been known in the psychometric literature for decades, where it is called item parameter drift. Moreover, just because there is no consensus as to why cross-generational scores tended to rise in the mid-twentieth century, that does nothing to undermine the validity of interpreting IQ scores within a generation.

"I.Q. can also be powerfully affected by environment. As Eric Turkheimer of the University of Virginia and others have shown, growing up in poverty can affect your intelligence for the worse. Growing up in an emotionally strangled household also affects I.Q. One of the classic findings of this was made by H.M. Skeels back in the 1930s. He studied mentally retarded orphans who were put in foster homes. After four years, their I.Q.'s diverged an amazing 50 points from orphans who were not moved. And the remarkable thing is the mothers who adopted the orphans were themselves mentally retarded and living in a different institution. It wasn't tutoring that produced the I.Q. spike; it was love."

Brooks is telling all parents of children who have Mental Retardation or Borderline Intelligence that their children's low cognitive ability is a direct result of parental inadequacy. If these parents would only love their children more, the Mental Retardation would go away.
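To make "factor analysis of one flavor or another" concrete, here is a bare-bones sketch that extracts a first factor from an invented correlation matrix among five mental tests. The tests, the correlations, and the use of the first principal component (a crude but serviceable flavor of extraction) are all my choices for illustration; the thing to notice is the positive manifold and the strong first factor.

```python
import numpy as np

# Hypothetical correlations among five mental tests (vocabulary, matrices,
# arithmetic, block design, digit span). Invented numbers, but they show the
# positive manifold: every test correlates positively with every other.
R = np.array([
    [1.00, 0.55, 0.50, 0.45, 0.40],
    [0.55, 1.00, 0.52, 0.48, 0.38],
    [0.50, 0.52, 1.00, 0.47, 0.42],
    [0.45, 0.48, 0.47, 1.00, 0.36],
    [0.40, 0.38, 0.42, 0.36, 1.00],
])

eigvals, eigvecs = np.linalg.eigh(R)                # eigh returns ascending eigenvalues
g_loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # first-component loadings
g_loadings *= np.sign(g_loadings.sum())             # orient the factor positively

print(np.round(g_loadings, 2))             # every test loads strongly on one factor
print(round(eigvals[-1] / R.shape[0], 2))  # proportion of variance the factor explains
```

Every test loads positively and substantially on the same factor; that is the century-old empirical regularity behind g.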
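And since "latent trait models" may be opaque, here is one standard re-calibration step, mean-sigma linking of item difficulties across two calibrations, in sketch form. The difficulty values are invented for illustration, and real linking work involves far more care with anchor items and model fit.

```python
import numpy as np

# Hypothetical item-difficulty estimates for the same five anchor items,
# calibrated separately on an older and a newer cohort. Drift shows up as
# the newer estimates sliding downward: the items look "easier" against
# the newer cohort's ability scale.
b_old = np.array([-1.2, -0.5, 0.0, 0.6, 1.3])
b_new = np.array([-1.5, -0.9, -0.4, 0.3, 0.9])

# Mean-sigma linking: rescale the new calibration onto the old metric so
# that scores from the two cohorts can be compared validly.
A = b_old.std(ddof=1) / b_new.std(ddof=1)
B = b_old.mean() - A * b_new.mean()
b_new_linked = A * b_new + B

print(round(A, 2), round(B, 2))           # slope and shift of the linking
print(np.round(b_new_linked - b_old, 2))  # leftover item-by-item drift
```

It is exactly the bathroom-scale re-calibration from the analogy above, done with algebra instead of a screwdriver.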
If I were king, I would mandate that any person with the gumption to make asinine statements like this do two things: (a) read Spitz's chef d'oeuvre, and (b) spend a week with a family who have a child diagnosed with Mental Retardation. Not just a daily visit, but an in vivo experience. Then get back to me about how easy it is to raise the cognitive ability of people with mental retardation. By the way, Turkheimer's studies look at the ability of environmental variance to modify heritability estimates. Specifically, people who grow up in impoverished environments have more variable environments, which, almost by definition, decreases heritability estimates (a worked illustration of why appears at the end of this passage). This is a very long cry from showing that "growing up in poverty can affect your intelligence for the worse."

"Then, finally, there are the various theories of multiple intelligences. We don't just have one thing called intelligence. We have a lot of distinct mental capacities. These theories thrive, despite resistance from the statisticians, because they explain everyday experience. I'm decent at processing words, but when it comes to calculating the caroms on a pool table, I have the aptitude of a sea slug."

What? A few paragraphs ago general intelligence existed; now it doesn't? Anyway, it is an awful shame when everyday experience does not map onto what the data tell us: Beth Visser recently (gasp!) gathered data to test Gardner's theory. What did she find? Basically what John Carroll said she would find a decade ago: these multiple intelligences all positively correlate (sans kinesthetic intelligence), and a strong g factor can be extracted when the measures are factor analyzed (exactly the pattern in the factor-analytic sketch above).

"I.Q., in other words, is a black box. It measures something, but it's not clear what it is or whether it's good at predicting how people will do in life. Over the past few years, scientists have opened the black box to investigate the brain itself, not a statistical artifact."

I wish I had the luxury of being able to write blatantly false statements in a national paper. There is over 100 years of empirical literature investigating the construct validity of IQ. There is also 100 years of literature examining what, and how well, IQ scores predict life outcomes. A simple perusal of Jensen's g Factor or Brand's g Factor (this one is even available for free!) would have sufficed here; but who wants data to interfere with a good opinion?

"Now you can read books about mental capacities in which the subject of I.Q. and intelligence barely comes up. The authors are concerned instead with, say, the parallel processes that compete for attention in the brain, and how they integrate. They're discovering that far from being a cold engine for processing information, neural connections are shaped by emotion."

...and you can read books about journalism in which the subject of sophism barely comes up, namely because those books are concerned with journalism, not logical arguments. Why would a cognitive scientist who is writing a book about attention necessarily include a chapter about intelligence? As a rule, cognitive scientists tend to be concerned with general processes, not individual differences. The fields can learn much from each other, but they are concerned with very different areas of investigation.
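Because heritability is a ratio of variances, the Turkheimer point is really just arithmetic. A toy decomposition (the variance figures are invented) makes it plain:

```latex
% h^2 is the share of trait variance attributable to genetic variance:
\[
h^2 = \frac{\sigma^2_G}{\sigma^2_G + \sigma^2_E}
\]
% Hold genetic variance fixed at 10 and let environmental variance grow,
% as it does in more impoverished (more variable) environments:
\[
\frac{10}{10 + 5} \approx 0.67
\qquad\text{vs.}\qquad
\frac{10}{10 + 15} = 0.40
\]
```

Nothing about anyone's genes changed between those two fractions; only the environmental variance grew, and the heritability estimate fell. That is what "environmental variance modifies heritability estimates" means, and it is a very different claim from "poverty lowers intelligence."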
"Antonio Damasio of the University of Southern California had a patient rendered emotionless by damage to his frontal lobes. When asked what day he could come back for an appointment, he stood there for nearly half an hour describing the pros and cons of different dates, but was incapable of making a decision. This is not the Spock-like brain engine suggested by the I.Q."

By all means, let's infer from one person with severe brain damage to the entire population. But if we want to play this game: I had a patient once who had just started kindergarten but could do addition, subtraction, multiplication, and long division (the latter of which he deduced how to do pretty much on his own). He did not need a school to teach him any of this, so let's get rid of elementary schools for everyone. After all, if my patient could figure out long division, so should every other 5-year-old.

"Today, the research that dominates public conversation is not about raw brain power but about the strengths and consequences of specific processes. Daniel Schacter of Harvard writes about the vices that flow from the way memory works. Daniel Gilbert, also of Harvard, describes the mistakes people make in perceiving the future. If people at Harvard are moving beyond general intelligence, you know something big is happening."

Harvard never was a bastion for the study of general intelligence; that was the University of London. In fact, except for Yerkes, Herrnstein, and, to some extent, Pinker, I can't think of too many professors there who contributed much to the study of general intelligence. And since when did Harvard's psychology department become the measuring stick by which the importance of a research agenda is measured? I'm sure much of the work they do there furthers the general field of psychology, but what makes their research more special than, say, that of Berkeley, Stanford, or UT-Austin?

"The cultural consequence is that judging intelligence is less like measuring horsepower in an engine and more like watching ballet. Speed and strength are part of intelligence, and these things can be measured numerically, but the essence of the activity is found in the rhythm and grace and personality — traits that are the products of an idiosyncratic blend of emotions, experiences, motivations and inheritances."

This paragraph is quite confusing, perhaps due to the mixing of automotive and ballet metaphors. I think Brooks is trying to tell his readers that he thinks personality is important in modern culture. I agree. And that has absolutely no bearing on the importance (or lack thereof) of cognitive ability in the same culture.

"Recent brain research, rather than reducing everything to electrical impulses and quantifiable pulses, actually enhances our appreciation of human complexity and richness. While psychometrics offered the false allure of objective fact, the new science brings us back into contact with literature, history and the humanities, and, ultimately, to the uniqueness of the individual."

What? First, psychometrics (and specifically, the study of cognitive ability) has always held the uniqueness of the individual as paramount. Second, how has the study of cognitive ability NOT shown the complexity of humanity? Sir Cyril Burt, one of the pioneers of the field, was enamored with the complexity of the students he encountered while a school psychologist in London. In fact, he was an ardent supporter of psychological measurement precisely so that he could begin to quantify, and, ultimately, understand and predict, this variability (see a bibliography here).
More modern techniques, such as fMRI, extend the work of psychometrics in that they add to our ability to quantify individual variability at a much more precise level; the two are quite complementary. From here:

"Despite the sometimes contentious controversy about whether intelligence can or should be measured, the array of neuroimaging studies reviewed here demonstrates that scores on many psychometrically-based measures of intellectual ability have robust correlates in brain structure and function. Moreover, the consistencies demonstrated among studies further undermine claims that intelligence testing has no empirical basis."

In the world of academia, to have your ideas printed in a reputable journal, you have to go through the peer-review process. While there are arguments about the pros and cons of that process, at least it frequently squashes ill-informed, blatantly false propaganda before it reaches the masses. After reading an op-ed like this, one wishes the NYT had a similar mechanism in place.

Labels: David Brooks, general intelligence, IQ