Monday, December 31, 2007
Sunday, December 30, 2007
John Hawks, Bruce Lahn, and company have a fun paper in Trends in Genetics on the role of introgression of adaptive alleles from archaic Homo species in human evolution. The key point:
[A]n allele with a 5% advantage has a 10% chance of fixation. In fact, selected alleles in an exponentially growing population have a slightly higher probability of fixation, augmented by twice the intrinsic rate of growth. Because each copy of an introduced allele has this fixation probability, only a small number of matings between archaic and modern populations would ensure the eventual fixation of a large proportion of the adaptive archaic alleles. For example, for any and all advantageous archaic variants with s = 0.01, a 95% probability of fixation requires only 74 archaic-modern matings, each introducing a single copy of the allele into the modern human population. Widespread introgression of selected alleles would occur with a minimal level of interbreeding, which would leave a negligible effect on even large samples of neutral loci.

There are, of course, assumptions in these sorts of calculations--perhaps most importantly, the fitness advantage of an introgressed allele would probably be canceled out or reversed by the effects of hybrid incompatibility loci for the first few generations--but it certainly seems reasonable to assume that a number of alleles slipped through the species barrier.
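The arithmetic behind those figures is easy to check. Here is a quick sketch; note that the intrinsic growth rate r = 0.01 below is my assumption (chosen because it reproduces the 74-mating figure), not a number given in the excerpt.

```python
import math

def matings_for_fixation(s, r=0.0, target=0.95):
    """Matings needed so that at least one introduced copy of a
    beneficial allele eventually fixes with probability >= target.
    Branching-process approximation: fixation probability ~ 2s in a
    stable population, ~ 2(s + r) with intrinsic growth rate r."""
    p_fix = 2 * (s + r)
    # 1 - (1 - p_fix)**n >= target  =>  n >= log(1 - target) / log(1 - p_fix)
    return math.ceil(math.log(1 - target) / math.log(1 - p_fix))

print(2 * 0.05)                            # s = 0.05 -> ~10% per-copy fixation chance
print(matings_for_fixation(0.01, r=0.01))  # 74, matching the paper's figure
print(matings_for_fixation(0.01, r=0.0))   # 149 without the growth augmentation
```

With r = 0 the same target would take twice as many matings, so the growth augmentation does real work in their calculation.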
The authors point to a number of candidate gene studies, and have some speculation about the types of alleles that might have been beneficial to the invasive Homo sapiens. To systematically find these alleles and be able to generalize about them, one needs large-scale genome sequencing. Luckily, this type of data is being generated, as I type, at genome centers across the world.
Since the previous post was about the tendency toward radical skepticism and subjectivism within cultural anthropology, I thought I would point to this piece in The Economist which highlights positive insights from various anthropological fields. The article emphasizes the possible role that population pressure and the quest for food might have had in spurring human innovation, from the atlatl to agriculture. An interesting point to note is the implicit suggestion that high rates of hunter-gatherer warfare might have constrained population pressure and possibly led to relatively higher standards of living; something familiar from Greg Clark's model. From a population genetic angle, I am curious whether the endemic warfare of pre-state cultures resulted in higher or lower Nm*.
* N = population size, m = migration rate. Nm > 1 results in equilibration of allele frequencies across demes, while Nm < 1 tends to lead to divergence.
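To put numbers on that threshold, here is Wright's island-model approximation (a textbook result, not something from the post): at migration-drift equilibrium among many demes, F_ST ≈ 1/(1 + 4Nm), so Nm = 1 corresponds to fairly modest differentiation while Nm ≪ 1 corresponds to strong divergence.

```python
def fst_island(Nm):
    """Wright's island model at migration-drift equilibrium:
    F_ST ~ 1 / (1 + 4Nm) across many demes of size N exchanging
    migrants at rate m."""
    return 1.0 / (1.0 + 4.0 * Nm)

for Nm in (0.1, 0.5, 1.0, 5.0):
    print(Nm, round(fst_island(Nm), 3))  # 0.714, 0.333, 0.2, 0.048
```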
Thursday, December 27, 2007
Steve points me to this George Johnson piece. Regular readers of this weblog know that we have had our differences with Jared Diamond. That being said, Diamond's ideas are clear & distinct; you can actually understand (and disagree with) what he is trying to say. A few years back, when the Savage Minds weblog was getting into it with Diamond's defenders on the blogosphere, one of the main issues seemed to be that it was hard to parse exactly what problem the cultural anthropologists had with Diamond besides the obvious perception from their camp that he was a racist (a post authored on GNXP was actually used to support that contention!). There are two distinct issues at work here, one general and the other rather specific.
First, some anthropologists, generally of a cultural or social bent, have become enamored of the same fashions which are rife within literary scholarship. One could use a catchall term like "Post Modernism" to describe these tendencies, though that's oversimplifying. Roughly, the flight to relativism and the acknowledgment of the subjectivity of scientific methods inevitable in the human sciences have been taken almost to a reductio ad absurdum by cultural anthropologists. The broader dynamic was one reason that Stanford's anthropology department was split in two, separating those who viewed their discipline as a science from those who took a more humanistic tack. In the latter case one could say that the goal is interpretation, not analysis; fine-grained description as opposed to smoking out systematic general truths. The trend toward very specific description and the disinclination to place the local in a general context lead to intellectual myopia. Imagine a riverine system studied by two groups of scholars. One group uses a method where a researcher takes a very deep core sample at one location. They examine that core and perfectly characterize the sedimentary structure at that location. The other group engages in a broad study of shallow cores and visual inspection across the whole system; they lack detailed specific knowledge but are attempting to sketch out the general dynamics of the system. Obviously there are strengths and weaknesses to both methods, and your needs and goals need to be kept in mind. The generalists will no doubt elide specific details, while those who pore over a specific deep core will accept a trade-off between their detailed local knowledge and the broader framework.
And so it is when "thick description" partisans square off against general system-builders. General system-builders will usually be wrong; most theories do not stand up to the test of time, and the vast majority of hypotheses are false. Additionally, they will ignore local detail and overgeneralize so as to remove outliers from their model. This is not a bug, but a feature! Cultural anthropologists who jump upon inaccuracies in inferred detail (that is, they contend that the hypothesis does not hold in the case of their studied culture) seem not to consider that system-builders, by the nature of their topic of study in the human sciences, will offer up statistical truths, as opposed to apodictic ones. I suspect that this confusion is in part due to the fact that many cultural anthropologists seceded from the nation of social science just as statistical techniques became ubiquitous in validating assertions of truth. The problem with American cultural anthropology is not that it is not true, but that it can never be wrong! Whereas they see the naked & plain error within Diamond's work as a mark of its folly, in truth it is simply the beauty of science that falsehoods are exposed for what they are. On occasion marginal deviations along the edges of a theoretical construct are even cleaned up in future iterations. Imagine that, scientific progress! Instead of rebutting Diamond's thesis with their own general system, cultural anthropologists reject the whole project in its entirety. In the stinginess of their vision I must admit that they remind me of Michael Behe, who implies that what is not known or understood with any level of clarity in the present shall be incomprehensible in a naturalistic sense indefinitely by its very nature.
As for the specific problem with cultural anthropology, it is encapsulated in this quote from the piece above, "Diamond in effect argues that no one is to blame," said Deborah B. Gewertz, an anthropologist at Amherst College. "The haves are not to be blamed for the condition of the have-nots." Does the ethologist blame the sick wildebeest which is killed by the lion? Does the conservation biologist blame the dingo for likely having driven the Tasmanian tiger to extinction? Or does the conservation biologist absolve the dingo of blame because the arrival of Europeans would likely have heralded the tiger's doom in any case? Does the particle physicist give thanks to CP violation for allowing the flourishing of our civilization? And so on. These are ridiculous queries because even though a wildlife biologist might, as a human, harbor an affection for the animals of their study, in the end they are animals to study. This sort of objectivity, or at least the attempt, seems anathema to some anthropologists who see themselves as activists and actors who are deeply engaged with the material basis of their scholarship. Despite the cultural anthropologists' rejection of general inferences from data, they seem to have no great qualms in making general normative assertions derived from their own axiomatic value system.
As human beings we are likely cognitively biased toward viewing our own species as special. This crops up in taxonomy, where Carl Linnaeus placed us within our own genus even though subsequent cladistic systematics implies that we form a monophyletic lineage with the other great apes. The Great Chain of Being suffused early evolutionary thinking, and even after our descent from pre-human primates was acknowledged, our morphogenesis was conceived in a teleological light: we were the crown jewel of biological processes. The Modern Synthesis banished this sort of teleological thinking from evolutionary biology, killing the batch of orthogenetic theories which reigned supreme circa 1900. In the first half of the 20th century anthropology was an ideological discipline which also expressed a teleology: the evolution of human societies expressed a trend which culminated with the Europeans, and anthropologists were an arm of the supremacist Zeitgeist in the West. The Nazi abomination showed anthropologists that such activism was illegitimate. But instead of turning from activism and ideological pursuits, anthropology simply inverted itself; it became a handmaid of the counter-cultural elite, pushing relativism and lack of positive assertion as virtues, except in its rejection of the West and a general suspicion of the culture of European man. The disaster of racial science as handmaid to the racial state did not draw anthropologists to the conclusion that aspiration toward objectivity should be their goal; rather, they switched sides en masse and hitched their wagon to the cultural winners in the academy.
Though this secured their place in the humanities departments, it also made them a laughing stock in the eyes of other scientists. Here was what L. L. Cavalli-Sforza stated when I interviewed him:
I entirely agree that the average quality of anthropological research, especially of the cultural type, is kept extremely low by lack of statistical knowledge and of hypothetical deductive methodology. At the moment there is no indication that the majority of cultural anthropologists accept science - the most vocal of them still choose to deny that anthropology is science. They are certainly correct for what regards most of their work.
Anyone who is familiar with Cavalli-Sforza knows he is a humanist; he has a passion for humanity and wishes to understand our species to the best of his ability. It is clear that he does not perceive that cultural anthropologists share the same passion for understanding, as opposed to their own admittedly subjective interpretations. The evolutionary geneticist James F. Crow stated, regarding controversial research on human evolution & behavior:
I hope that such questions can be approached with the same objectivity as that when we study inheritance of bristle number in Drosophila, but I don't expect it soon. There are too many strongly held opinions. I thought Lahn had a clever idea in thinking that the normal alleles of head-reducing mutants might be responsible for evolution of larger heads in human ancestry. Likewise, I think that Cochran et al. are fully entitled to consider the reasons for Jewish intelligence and I found their arguments interesting. In my view it is wrong to say that research in this area -- assuming it is well done -- is out of order. I feel strongly that we should not discourage a line of research because someone might not like a possible outcome.
Is man but a fly? Why not? I can give you my ethical and moral rationales for why man is not a fly in an ontological sense, but scientifically we are of the same essence: the same atomic units, many of the same genetic switches, and so forth. The insight that man is an animal was one Charles Darwin popularized in the 19th century, but cultural anthropologists reject this truth because they reject all truths except the ones they feel privileged to assert from their perches as conscious and enlightened folk (but is not being enlightened itself an expression of a hegemonic mindset?). It is difficult to take seriously a system of scholarship which seems to promote obscurity and subjectivity as goods. Study of human societies is more difficult than breaking down a molecular genetic pathway; but that is no excuse to give up the quest for clarity, precision and prediction. We're a complex species, and there are many contingent variables which clog up any system. But I see no reason that this justifies reading societies like a work of fiction, presenting arguments as clever word games which rise and fall based on prose opacity and the fads of the day. Cultural anthropology's adherence to critique is not the problem; criticism is a necessary antidote to sloppy thinking. Rather, the problem is its promotion of critique as the sine qua non of the discipline, and its insulation from falsification by saying nothing positive at all. They should leave criticisms of Jared Diamond's grand system of the world to those who actually believe that such activities are not scandalous in the first place!
Wednesday, December 26, 2007
Received this email:
It appears your website has been compromised. When visiting https://gnxp.com (as opposed to regular http) Firefox prompted me with a message that the security certificate for snakeoil.dom has expired. After some googling I found out it is likely an authentication certificate for a virus.
I didn't have the same problem. I'm in a hurry, but I assume this is a client-side issue? There isn't an SSL certificate for this website.
Update: See this.
Monday, December 24, 2007
Absence of contagious yawning in children with autism spectrum disorder:
This study is the first to report the disturbance of contagious yawning in individuals with autism spectrum disorder (ASD). Twenty-four children with ASD as well as 25 age-matched typically developing (TD) children observed video clips of either yawning or control mouth movements. Yawning video clips elicited more yawns in TD children than in children with ASD, but the frequency of yawns did not differ between groups when they observed control video clips. Moreover, TD children yawned more during or after the yawn video clips than the control video clips, but the type of video clips did not affect the amount of yawning in children with ASD. Current results suggest that contagious yawning is impaired in ASD, which may relate to their impairment in empathy. It supports the claim that contagious yawning is based on the capacity for empathy.
Someone should do behavioral economics studies on groups of autistic individuals. It would surely validate the mid-20th century microeconomic consensus.
Sunday, December 23, 2007
Would it be possible that hairlessness and skin color evolved to allow fathers to identify their direct biological offspring, so as not to have to support another male's genes? In many other species, when a new male or group of males takes over, they kill the existing pups, so there seems to be pretty good evidence that parents prefer their direct offspring to the offspring of others. Now, it's difficult to say that this is natural selection, as there's no particular reason to think that the existing brood has "less fit" genes than the new brood would provide. It would also be interesting to see whether or not there is greater morphological variation in appearance in humans than in other great apes. This would seemingly bolster the argument that much of human culture evolved around allowing fathers to identify their specific offspring and prefer them over other males' offspring. Men would tend to prefer women with greater morphological variation and hairlessness because these women's children would tend to exhibit features that would allow the male to more readily identify parentage. I have done a fair amount of googling and have not been able to find anyone else making this hypothesis, but I would be happy if someone could provide any insight into prior references to this sort of theory, namely that hairlessness was sexually selected so that fathers could identify their specific biological offspring.
The one hypothesis posited by Judith Rich Harris seems to assert post-modernist sounding psychological factors; her "us hairless/them hairy" sounds like something straight out of post-colonialist theory or the Frankfurt School. If we are going to seriously consider the role of genetics and evolutionary history in current human affairs we need to prefer strict genetic causes over psychological-ideal "causes", which are really manifestations of the ghost-in-the-machine mentality.
My interest in genetics comes from my interest in ethical philosophy and economics, so I'll state up-front that I have no specific expertise here and ask for your indulgence. As I understand it, sexual selection for dimorphism takes a long time. However, traits that are sexually monomorphic can manifest relatively quickly in a population. Significant selective pressure for less hairy females would thus result in offspring of both sexes exhibiting lower levels of fur over a relatively short period of time.
If there is one thing that I believe but cannot yet demonstrate, it's that much of the variety of human life and culture evolved to allow human beings to identify their kin and prefer those kin to non-kin.
A final note: most people agree that sexual selection acts mostly on males, which is why males exhibit more exaggerated traits in almost every attribute. This is true. But imagine a situation where a particular region has developed high rates of pair-bonding and males and females raise their offspring as a pair. Males might not be willing to care for a child that they could not specifically recognize as their own; therefore, the female would continue eliminating children until she produced a child which a specific male could identify as his own and take responsibility for. So, what I am doing is taking a cue from Harris and treating parental selection as a cognitively complex variation on sexual selection; this is kin selection plus sexual selection.
P.S. This is my first post, so be gentle.
I'm sure many of you will be busy getting to your "destinations" in the next day or two before Christmas. Hope all goes well. Also, my mother informed me it was Eid ul-Adha a few days ago, so best wishes to the Muslim readers (when I was a kid I didn't know there were two Eid festivals, I just assumed that the lunar calendar was wack with frequency). Since I'm partial to Greeks, that's all I'll say for winter holidays this year. Below the fold, some Christmas music....
Saturday, December 22, 2007
The New York Times has a story, Where Boys Were Kings, a Shift Toward Baby Girls:
...South Korea is the first of several Asian countries with large sex imbalances at birth to reverse the trend, moving toward greater parity between the sexes. Last year, the ratio was 107.4 boys born for every 100 girls, still above what is considered normal, but down from a peak of 116.5 boys born for every 100 girls in 1990.
Please note that the "normal" sex ratio is usually skewed somewhat toward males, around 105 to 100 (the explanation I received is that sperm carrying the Y are faster because they are smaller; I would appreciate anyone falsifying this if they know the "true story"). But I also found it peculiar that the article did not note that another East Asian society has switched from son to daughter preference in the past few decades: Japan. The moral of this story is, I think, that economic and social development are more critical in shaping these trends than laws enacted from on high. Japan developed earlier than South Korea, and the change in societal attitudes on this issue occurred earlier.
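For perspective, it helps to convert these ratios into proportions of births that are male; even the 1990 Korean peak is only about 2.6 percentage points above the "normal" skew. A trivial calculation:

```python
def proportion_male(srb):
    """Convert a sex ratio at birth (boys per 100 girls) to the
    proportion of births that are male."""
    return srb / (srb + 100.0)

# "normal" ratio, the 1990 Korean peak, and the latest figure from the article
for label, srb in (("normal", 105.0), ("1990 peak", 116.5), ("latest", 107.4)):
    print(label, round(proportion_male(srb), 3))  # 0.512, 0.538, 0.518
```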
As background to a couple previous posts where I made somewhat technical comments about simulations in population genetics, I was in the middle of writing a rather lengthy primer on coalescent theory. Then I saw that RPM has an old post pointing to some of the same material I was planning on hitting. So instead I'll just say read his post and the links therein. I may get around to finishing what I was writing (there's a bit of math that most people don't care to see, so maybe not), but you can essentially boil it down to RPM's last point:
Without a null model based on the coalescent, there is no way to statistically test hypotheses that are based on DNA data, regarding things like population structure.

Or natural selection.
Labels: Population genetics
Nature has a commentary on the use of "cognitive-enhancing drugs" in healthy and "diseased" individuals. They opine:
The debate over cognitive-enhancing drugs must also consider the expected magnitude of the benefits and weigh them against the risks and side effects of each drug. Most readers would not consider that having a double shot of espresso or a soft drink containing caffeine would confer an unfair advantage at work. The use of caffeine to enhance concentration is commonplace, despite having side effects in at least some individuals. Often overlooked in media reports on cognitive enhancers is the fact that many of the effects in healthy individuals are transient and small-to-moderate in size. Just as one would hardly propose that a strong cup of coffee could be the secret of academic achievement or faster career advancement, the use of such drugs does not necessarily entail cheating.

There seems to be some ingrained human reaction against what is considered "cheating" (which is one of those things whose obvious definition to one person might be considered ridiculous by someone else); this would be the major barrier these drugs will have to overcome in order to become commonplace.
Shelly Batts at Retrospectacle asks the question I was thinking when I read this article:
A commentary today in Nature, by Sahakian and Morein-Zamir, poses the question: if you could take a pill which enhanced attention and cognition with few or no side effects, would you?

Frankly, I do enjoy having my "attention enhancers" in the widely-used and socially acceptable beverage form, but I can imagine a world where it's socially acceptable to pop an IQ pill, and it doesn't seem that bad to me at all.
Thursday, December 20, 2007
So says the European Journal of Human Genetics, in response to the flood of data from genome-wide association studies and other genomic data in the field of human genetics:
[O]ne might maliciously wonder if we are not (temporarily, in this field and pending subsequent functional studies) close to the ultimate consumption date of the Popperian approach of hypothesis-driven research. For was not a main goal of this to unravel the truth in the most efficient, that is, plausible way, faced with a daunting scarcity of collectible data? Well, if it becomes cheaper to just collect all data required than to run after a hundred consecutive, plausible, but wrong hypotheses, starting with a hypothesis becomes an economic futility. The hypothesis as a guiding principle is then replaced by a truism: if one does not throw away anything before thoroughly assessing its irrelevance, one will always find what one is looking for.
Wednesday, December 19, 2007
More 'altruistic' punishment in larger societies:
...Second-party punishment is when you punish someone who defected on you; third-party punishment is when you punish someone who defected on someone else. Third-party punishment is an effective way to enforce the norms of strong reciprocity and promote cooperation. Here we present new results that expand on a previous report from a large cross-cultural project. This project has already shown that there is considerable cross-cultural variation in punishment and cooperation. Here we test the hypothesis that population size (and complexity) predicts the level of third-party punishment. Our results show that people in larger, more complex societies engage in significantly more third-party punishment than people in small-scale societies.
Labels: Behavior Genetics
Saturday, December 15, 2007
In a previous post, I made the case that the evidence in Hawks et al. (2007)[pdf] should not convince you that human adaptive evolution is accelerating. This is a follow-up (again fairly technical) to that post. Again, I'll reiterate that I find the theory largely convincing. If that's all you want to hear, don't keep reading. Otherwise, below the fold I have some additional comments and respond to John Hawks's answers to my critiques.
1. It has been pointed out that the test for selection used in Hawks et al. appears to have been used on all the individuals in the HapMap. People familiar with the HapMap will know that the European and African samples are in 30 trios--i.e., two parents and one child. This provides excellent accuracy for phasing the parents; however, there are only 60 independent individuals per population. The genotypes of the children are simply reshufflings of the parents'. Both Wang et al. and Hawks et al. refer to "90" individuals in both the European and African populations. If it is true (as it appears to be, though I'm sure I'll be corrected if it's not) that all 90 individuals were used in the analysis, this is potentially a major problem. Think about it this way--the test for selection is based on linkage disequilibrium structure, which is the correlation between alleles at nearby loci. Now if you include related individuals, you introduce correlation simply due to the fact that 1/3 of the individuals are rearrangements of the other two-thirds. Allele frequency estimates, for similar reasons, are also obviously distorted. I'm not sure exactly how this would affect the results, but it's a highly non-standard analysis, and the burden of proof is on the authors to test whether it's legitimate. I have my doubts, and find it quite plausible that many (most?) of the "selection events" detected in this type of analysis are not selection at all, but rather something having to do with the structured nature of the sample.
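As a toy illustration (mine, not from either paper) of the non-independence problem: simulate allele frequencies estimated from 30 trios treated as 90 individuals, versus 90 genuinely unrelated individuals. Because the children's alleles are copies of parental alleles, the trio-based estimate has substantially higher sampling variance, i.e., fewer effective independent draws than the nominal 180 chromosomes.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n_trios, n_reps = 0.5, 30, 20000

# Parents carry 2 alleles each; the child's alleles are copies of one
# random maternal and one random paternal allele (as in a phased trio).
mom = rng.random((n_reps, n_trios, 2)) < p
dad = rng.random((n_reps, n_trios, 2)) < p
pick_m = rng.integers(0, 2, (n_reps, n_trios, 1))
pick_d = rng.integers(0, 2, (n_reps, n_trios, 1))
child = (np.take_along_axis(mom, pick_m, axis=2).sum((1, 2))
         + np.take_along_axis(dad, pick_d, axis=2).sum((1, 2)))

# "90 individuals": 180 allele slots per replicate, but the child slots
# duplicate parental alleles, so only 120 are independent draws.
p_hat_trio = (mom.sum((1, 2)) + dad.sum((1, 2)) + child) / (6 * n_trios)

# Benchmark: 180 genuinely independent alleles
p_hat_indep = (rng.random((n_reps, 6 * n_trios)) < p).mean(axis=1)

ratio = p_hat_trio.var() / p_hat_indep.var()
print(round(ratio, 2))  # analytically 5/3 ~ 1.67 at p = 0.5
```

The same double-counting logic applies, with messier algebra, to the pairwise LD statistics the test is built on.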
2. Popgen Ramblings has a nice post explaining how one could provide support for the acceleration hypothesis through simulations. I agree. For example, let's look at Figure 3 from Hawks et al. This figure purports to show the expected age distribution of selected variants under the null hypothesis of a constant rate of adaptive evolution and under the alternative of an acceleration. Clearly, the "true" age distribution looks much more like the distribution expected under acceleration. But how realistic is that null distribution? That is, one could simulate, under certain demographic parameters, a fixed number of selected alleles arising 80000 years ago, 70000 years ago, etc, up to the present day, conditioning on the present allele frequency being in the frequency range of the LDD statistic. If one were to plot the fraction of those selective events that are detected as a function of the age, that would be something of an approximation to a real null distribution. And what would that distribution look like? Well, no one can know until it's actually done, but I'm a betting man, and I'd wager large sums of money that it would look a lot like the "alternative" hypothesis (the "demographic model") shown in this figure.
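Here is a minimal version of the simulation just described, as a sketch under assumed parameters (N = 10,000 diploids, s = 0.01, and a 20-80% present-day frequency window standing in for the range the LDD statistic can see; none of these numbers come from the paper). It computes, for beneficial alleles arising a given number of generations ago, the fraction whose present frequency falls in the detection window.

```python
import numpy as np

def detectable_fraction(age, s=0.01, N=10000, window=(0.2, 0.8),
                        n_reps=2000, seed=1):
    """Fraction of beneficial alleles that arose `age` generations ago
    and sit in the detection window today (i.e., were neither lost nor
    already fixed, and are at a frequency the test could flag)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_reps):
        p = 1.0 / (2 * N)          # new mutant: a single copy
        for _ in range(age):
            if p == 0.0 or p == 1.0:
                break              # lost or fixed; frequency stops moving
            p = p * (1 + s) / (1 + s * p)          # deterministic selection
            p = rng.binomial(2 * N, p) / (2 * N)   # binomial drift
        if window[0] <= p <= window[1]:
            hits += 1
    return hits / n_reps

for age in (250, 500, 1000, 2000, 4000):
    print(age, detectable_fraction(age))
```

The qualitative expectation: very recent alleles are still too rare to detect, very old ones are fixed or lost, and the detectable fraction peaks at intermediate ages, which is exactly the shape that could mimic an "acceleration" signal.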
3. Some excerpts from John Hawks's response to my previous comments are in italics, followed by my thoughts:
we won't detect just any recent things -- in fact, we will not be able to detect recent things that are weakly selected. By contrast, we should detect older things that are weakly selected, but we will never detect older things that were strongly selected -- they're the ones that are fixed now.
The claim that we should detect older things that are weakly selected has not been demonstrated, and I find it unlikely to be true. Remember, LD decays with time, so there should be little signal around old selected variants. Again, simulations could address this.
In theory, strongly selected mutations ought to be vanishingly rare. In fact, they ought to be exponentially rarer than weakly selected mutations. That doesn't mean the theory has to be right, but it does mean we need some kind of explanation if we find that weakly selected things are rare, and strongly selected ones are common -- I mean, R. A. Fisher was wrong sometimes, but I'm not going out on a limb on this one.
Acceleration can explain this reversal
A more parsimonious explanation for this "reversal" is again statistical power. Statistical power absolutely, obviously varies with selection coefficient-- this test is going to detect things that have been strongly selected (if it detects selection at all; see above), and not things that have been weakly selected. So even if the age distribution of selected alleles isn't a statistical artefact, this "reversal" clearly is (though I suppose I could be proved wrong, again with simulations).
Strikingly, we found that increasing the SNP density in the new HapMap made very little difference to the number of selected variants estimated for the CEU sample -- we believe this is because we are finding basically everything there for the method to find. This leaves significant limits -- for instance, the limited frequency window we used. But we don't think we are missing lots of selection in high-recombination regions.
The reason that SNP density made little difference in the CEU population is that there is extensive LD in that population, and the phase I data were sufficient to characterize that LD. The test takes LD as a parameter, so if you already had a good estimation of this parameter, increased information doesn't help. The inference that this thus means the test isn't missing selection in high-recombination regions simply does not follow--that is a property of the test statistic that has not been demonstrated. One could simulate fully ascertained data in regions of varying recombination rate and test this. To my knowledge, this has not been done.
Recent genetic drift including founder effects would affect all genomic regions equally, but the candidate selected genes occur predominantly in genic regions, and preferentially include genes in functional classes that are plausible targets for recent adaptive changes. Selection is the only explanation consistent with all these features.
It is well-known that different functional classes of genes (and different parts of the genome) vary systematically in recombination rate, LD structure, gene length, and many other metrics. A change in power along one of these axes could equally lead to this observation, without the need to invoke natural selection. This alone is not evidence that the test is detecting anything (though it's true it provides some evidence).
In other words, our tests of acceleration do not depend very finely on the ascertainment of these alleles
Like I said, I find the theory solid. However, the test for acceleration does depend on the ascertainment of these alleles to a certain extent. Neither I nor John Hawks has any idea if 5%, 50% or 0% of these "selective events" are real. This is a problem.
Friday, December 14, 2007
The Sassanian Empire this week on In Our Time. Kind of obscure, so worth it. Speaking of obscurity, some reading on the dynamics of Islamicization in Iran (Conversion to Islam) revealed the fact that there was a strong tendency for new Persian converts and their offspring to use distinctively Arabic names during the first centuries, specifically ones associated with early Muslims. While Arab Muslims themselves might on occasion have had names which might also have been used by Jews or Christians (e.g., Arabic forms of David), Persian converts were underrepresented in these "ambiguous" variants; rather, their names signified that they had to be Muslim. But as the proportion of Iran's population which was Muslim increased (going from minority to majority sometime in the 10th century), there was a modest bounce back of pre-Islamic Persian names among the elites. The argument goes that only with the indigenization of Islam within Persian culture were Iranian forms and elements allowed to make an explicit comeback, since they no longer posed any threat as an alternative (there were principalities where the rulers still championed Zoroastrianism in regions such as the southern shore of the Caspian Sea as late as the 9th century). This of course neglects the elephant in the room: the early Caliphs seem to have transplanted Sassanian court motifs in toto to generate the aura around their monarchy. Additionally, I'm skeptical of the generality of this claim; the first Byzantine Emperor with a Hebraic name was Michael I, four centuries after public paganism had been definitively marginalized.
Thursday, December 13, 2007
A few weeks ago p-ter posted on the fact that a gene implicated in blondeness in humans, KITLG, has a binding partner, KIT, with a similar effect in horses. There's a new paper out, which I blog about here, showing that KITLG has a major effect on pigmentation in stickleback fish as well as humans, specifically a partially dominant skin-lightening effect in African Americans in an admixture study. So, like OCA2, this is now plausibly a case where selection for skin color could have driven secondary changes in phenotype (hair color). This makes more evolutionary sense, since blonde hair is considered to be recessive, and so at a great selective disadvantage at low frequencies. In contrast, if skin-lightening is partially dominant it will be strongly exposed to selection (I'm skeptical of the dominance, and the authors admit that more work needs to be done, but additivity has the same, if less marked, advantage over recessivity). Note that KITLG shows up in tests of selection for East Asians too. You can find details for KITLG in this paper, Signatures of Positive Selection in Genes Associated with Human Skin Pigmentation as Revealed from Analyses of Single Nucleotide Polymorphisms, and it showed up in Localizing Recent Adaptive Evolution in the Human Genome too. Note that the most recent paper, cis-Regulatory Changes in Kit Ligand Expression and Parallel Evolution of Pigmentation in Sticklebacks and Humans, is open access.
The UK Times today has a report on new research into trends in social mobility and the effect of education and social class. The research finds that social mobility declined between 1958 and 1970, and has not improved since then (boo!). But the Times focuses on a peripheral part of the research, which looks at a recent cohort of young children tested on a cognitive ability scale (actually vocabulary) at ages 3 and 5. According to the Times article:
The authors said: "Those from the poorest fifth of households, but in the brightest group at age 3, drop from the 88th percentile on cognitive tests at 3 to the 65th percentile at age 5. Those from the richest households who are least able at age 3 move up from the 15th percentile to the 45th percentile by age 5. If this trend continued these children from affluent backgrounds would be likely to overtake the poorer, but initially bright, children in test scores by age 7."
The article also contains a graph to illustrate this point, which unfortunately is not included in the online version. However, it is evidently based on Figure 4 in the original research report, available as a pdf file here.
Now, take a look at Figure 4, and don't all shout at once: regression towards the mean! The graph is almost a textbook illustration of what we would expect to find if we took test data from the extremes of two groups with different mean performance and then retested them at a later date.
This doesn't prove that this is simply a case of regression, but it is an obvious possibility which needs to be examined. As far as I can see (correct me if I am wrong) the authors do not consider it, but it is hard to tell, as the report is written in statisticese, a language only distantly related to English.
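To see how easily pure regression toward the mean can produce the pattern in Figure 4, here is a toy simulation. All the numbers (group means, noise level, decile cutoffs) are invented for illustration, not taken from the study; the point is only that selecting extremes on a noisy test and retesting mechanically pulls each group back toward its own mean.

```python
import random

random.seed(0)

def simulate(group_mean, n=20000, noise_sd=12.0):
    """Each child has a stable 'true ability' drawn around the group mean;
    each test adds independent measurement noise, so extreme observed
    scores at time 1 regress toward the group mean at time 2."""
    kids = [random.gauss(group_mean, 10.0) for _ in range(n)]
    t1 = [a + random.gauss(0, noise_sd) for a in kids]
    t2 = [a + random.gauss(0, noise_sd) for a in kids]
    return t1, t2

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical "rich" group with a higher mean score than the "poor" group.
poor_t1, poor_t2 = simulate(group_mean=95.0)
rich_t1, rich_t2 = simulate(group_mean=105.0)

# "Bright poor" children: top decile of the poor group at time 1.
cut_hi = sorted(poor_t1)[int(0.9 * len(poor_t1))]
bright_poor = [(a, b) for a, b in zip(poor_t1, poor_t2) if a >= cut_hi]

# "Dull rich" children: bottom decile of the rich group at time 1.
cut_lo = sorted(rich_t1)[int(0.1 * len(rich_t1))]
dull_rich = [(a, b) for a, b in zip(rich_t1, rich_t2) if a <= cut_lo]

bp_t1 = mean([a for a, _ in bright_poor]); bp_t2 = mean([b for _, b in bright_poor])
dr_t1 = mean([a for a, _ in dull_rich]);   dr_t2 = mean([b for _, b in dull_rich])

print(f"bright poor: {bp_t1:.1f} -> {bp_t2:.1f} (falls toward 95)")
print(f"dull rich:   {dr_t1:.1f} -> {dr_t2:.1f} (rises toward 105)")
```

No social process is operating here at all--the crossing lines fall out of selection on a noisy measure plus retesting.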
Wednesday, December 12, 2007
The Office for National Statistics in the UK has released a report on fertility rates and population trends, including a breakdown by ethnic groups. Here is a report in the Times.
I had some trouble finding the relevant material on the ONS website, so to save our readers the trouble, here is the main report (1Mb pdf file) and here is the ONS Press Release (short pdf file).
I haven't read the full report myself yet, but at first glance there isn't anything very unexpected. The upturn in fertility rates for British-born women doesn't surprise me, as I have been harping for some time on this theme. The continuing high fertility of Pakistan- and Bangladesh-born women (compared both to indigenous Britons and immigrant groups from elsewhere) will naturally attract attention. The ONS says there is evidence that the differential is narrowing, but I doubt that it will be closed any time soon.
Added on 17 December
It was careless of me not to think of one very major factor relevant to fertility rates, namely the proportion of women who work. I don't have the latest figures, but in 2002 the proportions of women from different ethnic groups who were not in the labour market (i.e. not working or seeking work) were roughly as follows (ONS data from the Labour Force Survey):
Black Caribbean 22%
Black African 30%
What stands out here is the huge proportion of Pakistani and Bangladeshi women who are not in the labour market - more than twice as many as in any other group. I am sure there is a cultural/religious factor involved here: women who work are likely to mix with people from other religions, and even (horror of horrors) with men who are not their relatives or husbands. Much better just to stay at home and have babies.
Monday, December 10, 2007
The long-awaited "acceleration paper" from John Hawks and others has finally been published in PNAS. The claim is that humans are experiencing a burst of adaptive evolution, and the basic argument is deceptively simple: the recent increase in human population size has led, through an increased number of beneficial new mutations and an increased probability of fixation of said beneficial new mutations, to an acceleration in the rate of adaptive change in our species. The argument is motivated by population genetic theory (see Razib's summary of that theory here); here the authors look for genomic evidence.
If you find the theoretical argument convincing (as I do), then it's easy to accept their major conclusion. However, if you don't find the theoretical argument convincing, the evidence presented in this paper should not convince you. Below the fold, I discuss why.
Hawks et al. have assembled a dataset of what they call "ascertained selected variants", and apply standard methods to estimate the ages of these selection events. In their figure 1, you'll see the age distribution they infer--one that is heavily biased towards the present. The assembly of this database is the key part of their analysis, the one most likely to lead them astray, and it is left woefully underexplored.
The method used to identify selected variants was initially introduced in the context of a genome-wide scan for natural selection in the HapMap. The statistic is based around the haplotype length surrounding an allele, and they take the top 0.5% of the scores in the distribution of this statistic as selected. There are many questions about this method that are relevant to any downstream analyses-- for example, what is the false positive rate of the test? The false negative rate? Do these things depend on allele frequency or recombination rate? Most importantly, perhaps: is the statistical power to detect alleles of different ages identical? If the test has low power to detect old sweeps and good power to detect recent ones, there you go-- an artefactual acceleration. There is little discussion of these parameters in the original paper and less here. Don't get me wrong-- the areas of the genome they identify as selected are almost certainly enriched for true selective events, but how enriched? These questions are perhaps less important in a first-pass scan for selection, but if you're going to make generalizations about selected sites, they're essential. The claim in the paper that "demographic causes of extensive LD can be discriminated easily from those caused by adaptive selection" is not demonstrated (and is false).
The authors seem to be partially aware of this, writing "this finding [that few new sweeps are discovered with increased SNP density] indicates that most events (defined by the LDD test) coalescing up to 80,000 years ago have been detected [my emphasis]". In footnote 2, I point to a paper showing that the LDD test biases itself towards identifying sweeps in regions of the genome with a low recombination rate. Is there any reason to think that there's a similar bias with regards to allele age? In a word: yes. Consider how they're detecting selection-- they look for alleles that are at high frequency (but not fixed) and have extensive LD around them. Since haplotype length decreases with time, by definition, these are young alleles. Any old allele with a strong selection coefficient has long gone to fixation and is not detected (or more carefully, there's much less power to detect it).
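The age bias can be made concrete with a back-of-the-envelope calculation. Recombination whittles down the ancestral haplotype around a selected allele, so the expected intact genetic length after t generations is on the order of 1/t Morgans per side. This sketch assumes a uniform 1 cM/Mb map purely for illustration; it is not the LDD statistic itself.

```python
def expected_haplotype_span_mb(age_gens, cm_per_mb=1.0):
    """Very rough expected extent (Mb, one side) of the unbroken ancestral
    haplotype around an allele that arose age_gens generations ago:
    ~1/age_gens Morgans = 100/age_gens cM, converted to Mb assuming a
    uniform recombination map. A sketch, not the LDD test."""
    span_cm = 100.0 / age_gens
    return span_cm / cm_per_mb

# Haplotype signal shrinks fast with allele age (25-year generations):
for age in (400, 2000, 8000, 32000):  # roughly 10 ky to 800 ky
    print(age, round(expected_haplotype_span_mb(age), 4), "Mb")
```

An allele a few hundred generations old carries a haplotype of hundreds of kilobases; one tens of thousands of generations old carries almost nothing detectable, so a haplotype-length statistic is all but guaranteed to recover a young-skewed age distribution.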
The claims about the predictions of the null hypothesis of no acceleration are largely irrelevant given the above. Consider the prediction about the number of adaptive substitutions--that is, the authors claim that the number of selective events in the data predict an absurd number of adaptive substitutions between humans and chimpanzees. Now remember how these variants were identified-- they're in the 99.5th percentile of the distribution of their test statistic. Every distribution has a tail, so if they were to move their threshold a bit further to the right, surely they'd be able to narrow down the number of regions to something consistent with a constant rate. That is, the entire argument is predicated on perfectly identifying selection in the regions of the parameter space they search. This is a major assumption, and not one I'm willing to make without strong evidence. They provide none.
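The tail-of-the-distribution point is worth spelling out: a percentile threshold "finds" hits by construction, even when the statistic is computed on purely neutral data, and the count of hits is an artifact of wherever you draw the line. A minimal simulated illustration (the null here is just a standard normal, for the sake of the example):

```python
import random

random.seed(1)

# 10,000 "regions" scored with a purely neutral (null) test statistic:
null_stats = [random.gauss(0, 1) for _ in range(10_000)]

def hits_above_quantile(stats, q):
    """Count values strictly above the q-th empirical quantile."""
    cut = sorted(stats)[int(q * len(stats))]
    return sum(1 for s in stats if s > cut)

# A 99.5th-percentile threshold "detects" ~50 selected regions even though
# nothing is under selection; tightening the threshold shrinks the count
# arbitrarily, without any change in the underlying biology.
print(hits_above_quantile(null_stats, 0.995))
print(hits_above_quantile(null_stats, 0.999))
```

So the absolute number of regions called "selected"--and any substitution-rate extrapolation built on it--depends directly on an arbitrary cutoff.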
Note that the information about selection is in haplotype structure, but the test uses unphased genotype data. Their claim that this is actually not a bug, but a feature, because of the computational intensity needed to phase genotype data is unconvincing-- the phased haplotypes from the HapMap are freely available for download.
A recent review found that the recombination rate in regions the authors identified as being under selection is a full one-fifth of the genome-wide average. Unless you have some reason to believe that there's more selection in regions of the genome with low recombination rates (you don't), that's strong evidence that there's massive ascertainment bias at work here, at least along one dimension in parameter space.
The simulations presented in the original paper presenting the method are highly questionable--they do a bootstrap-like resampling scheme from the data, which treats each site as if it were independent. In real data (selected or otherwise), nearby sites have a shared genealogy, which is important to capture in any null model. There are widely-used programs available for simulating such samples; these should have been used.
Reading The Early Chinese Empires: Qin and Han, and I was struck by passages extracted from The Book of Lord Shang, a work which supposedly encapsulates the thought of the Legalist philosopher & statesman Shang Yang. In short, Shang Yang extols the virtues of the Malthusian Trap in perpetuating a stable & well ordered state! He believes that surplus only encourages mischief and social disorder, and any excess production should be burned away by wars of attrition so that the population is driven back to the margins of subsistence. Any surprise that Legalism was anathema for most of Chinese history despite the reality that the Qin state and its ruling philosophy had a significant practical impact upon how the Imperial system was structured?
Saturday, December 08, 2007
A few recent evolution/genetics-related posts worth reading:
1. Jonathan Eisen on the fascinating story of the metabolic symbiosis going on in the cells of the glassy-winged sharpshooter.
2. Evolgen on the evolution of sexually antagonistic genes.
3. Popgen Ramblings on the genetics of the sex ratio.
Friday, December 07, 2007
From Science: Genetically Determined Differences in Learning from Errors:
The role of dopamine in monitoring negative action outcomes and feedback-based learning was tested in a neuroimaging study in humans grouped according to the dopamine D2 receptor gene polymorphism DRD2-TAQ-IA. In a probabilistic learning task, A1-allele carriers with reduced dopamine D2 receptor densities learned to avoid actions with negative consequences less efficiently.
I've mentioned before how genomic data in both model organisms and humans is going to break open the field of behavioral genetics. This, however, is old-school genetics (though with the neuroimaging twist)--they have a single polymorphism which they believe to be functional, and they test it against their phenotype.
It's interesting if true, but the standards for claiming a genetic association have gone up since people got excited multiple times in the early 90s over false positives, and here, the small sample size and lack of replication make it easy to be skeptical. Apparently, many people are--here's Neil Risch:
Geneticist Neil Risch of the University of California, San Francisco, adds that this allele "has been a candidate gene for every imaginable psychiatric phenotype for 18 years now, and to my knowledge none of the originally reported associations has held up."
But still, one can imagine this study with 100X more people and genome-wide SNP chips. In a couple years, that won't be difficult.
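To see why small candidate-gene studies are so easy to doubt, here is a hypothetical power sketch using the standard normal approximation for a two-group comparison. The effect size (d = 0.3) and group sizes are invented for illustration--they are not the actual parameters of this study.

```python
import math

def power_two_sided(n_per_group, effect_d, alpha_z=1.96):
    """Approximate power to detect a standardized mean difference
    effect_d between two equal-sized groups, two-sided 5% test,
    normal approximation (ignoring the negligible far tail)."""
    se = math.sqrt(2.0 / n_per_group)       # SE of the mean difference, in d units
    z = effect_d / se
    # Phi(z - z_alpha), computed via the error function.
    return 0.5 * (1.0 + math.erf((z - alpha_z) / math.sqrt(2.0)))

# A modest allele effect with ~13 carriers per group, vs. 100x the sample:
print(round(power_two_sided(13, 0.3), 2))    # badly underpowered
print(round(power_two_sided(1300, 0.3), 2))  # essentially certain
```

With a dozen-odd subjects per genotype group, a real modest effect is detected only a small fraction of the time--which means most "significant" findings at that scale are tail events, and most tail events don't replicate.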
Thursday, December 06, 2007
Language tensions mount in bilingual Finland:
"Finland tries to teach everyone a lesson about morality but minorities in China are treated better," blasted Juhan Janhunen, an expert on Asian languages, comparing one of the most egalitarian countries in the world to the Communist regime.
Heikki Tala, the head of the Association for Finnish Culture and Identity, doesn't see a problem.
Labels: Finn baiting
Wednesday, December 05, 2007
James R. Flynn is a philosopher and psychologist at the University of Otago in New Zealand, as well as Distinguished Associate of the Psychometrics Centre at Cambridge University. His best-known paper, "Massive IQ Gains in 14 Nations" (Psych. Bulletin, 1987), documented what Herrnstein and Murray later called the "Flynn Effect": a long-term increase in average IQs across the developed world. This widely-reaffirmed result contradicted the folk wisdom that a coarsened culture and dysgenic fertility were making the rich nations less intelligent. In his new book, What is Intelligence? Beyond the Flynn Effect (Cambridge University Press), he argues that changing social and economic forces can explain both the Flynn Effect and group differences in IQ. To fully understand the Flynn Effect, he contends, we need to understand the "cognitive history" of the 20th century. Perhaps most importantly, he proposes a variety of practical empirical tests so that one can see whether his explanations are correct.
The author of four books and dozens of articles in the fields of moral philosophy and psychology, Professor Flynn has repeatedly spurred psychologists to rethink exactly what it is that intelligence tests measure.
1. In your new book, What is Intelligence? Beyond the Flynn Effect, you emphasize that IQ researchers are so focused on g, the general factor of intelligence, that they've been unable to see other important features in the IQ data. In particular, the "g-men," as you call them, seem to think that if the Flynn Effect is an overall increase in all IQ subtests, or an overall increase in a random subset of IQ subtests, then they can just ignore the Flynn Effect completely. So, what are the g-men missing out on?
Over time, changing social priorities alter the cognitive demands made on our minds. For example, society may want more and more people to put on scientific spectacles so they can understand the world rationally through education. IQ tests like Similarities and Raven's pick this up as enhanced performance. Yet, thanks to a more visual culture, society may not require us to enlarge our vocabularies - meaning no higher scores on the WISC vocabulary subtest. These trends are of great significance. If you dismiss these trends because they do not tally with the various tests' g-loadings, you miss all of that. G rather than social significance has become your criterion of what is important.
2. Over the decades, you've carried on an extensive correspondence with Arthur Jensen, the controversial and enormously influential intelligence researcher at UC Berkeley. You summarized some of your early thoughts about Jensen's work in your 1980 book Race, IQ, and Jensen, a book that, in my opinion, sets the standard for how to discuss this controversial topic. What have you learned about Jensen over the years, and what have your interactions with him taught you about the nature of scientific research?
I never suspected Arthur Jensen of racial bias. Over the years, I have found him scrupulous in terms of professional ethics. He has never denied me access to his unpublished data. His work stands as an example of what John Stuart Mill meant when he said that being challenged in a way that is "upsetting" is to be welcomed not discouraged. Before Jensen, the notion that all races were genetically equal for cognitive ability had become a dead "Sunday truth" for which we could give no good reasons. Today we are infinitely more informed about group differences. Equally important, the debates Jensen began are revolutionizing the theory of intelligence and our understanding of how genes and environment interact.
3. In an earlier book, Asian Americans: Achievement Beyond IQ, you contended that Asians appeared to do just as well as Whites on IQ tests-no worse or no better, with the possible exception of some narrow visuospatial abilities. You showed, in fact, that a lot of the apparent high Asian IQ scores were driven by the Flynn Effect. Since then, a number of studies catalogued by Lynn and Vanhanen seem to reinforce the conventional wisdom that Asians are usually doing better than Whites on IQ tests. Are you still convinced that there's no substantial difference in average IQ between whites and Asians, and if so, what's wrong with the recent data?
The Chinese Americans I studied were the generation born in 1945-1949. They were no higher than whites even for non-verbal IQ yet out-performed whites by a huge margin in terms of eventual occupational status. That meant that they could give their own children the kind of privileged environment they had never had. The result was a pattern of IQ that put the subsequent generation of Chinese Americans at an IQ of 109 at say age six gradually falling to 103 by the late teens, as parental influence faded away in favor of peers. The extra 3 points the present generation has as adults is due to the fact that they are in cognitively more demanding universities and professions and because they have internalized a positive attitude to cognitively challenging activities and companions.
4. At least at first glance, reading comprehension appears to involve a high degree of abstraction. If, as you argue in your new book, the Flynn Effect is largely driven by an exogenous rise in abstract thinking, then why hasn't the reading comprehension score increased by very much?
The Comprehension subtest of the WISC does show significant gains, though not nearly as great as Similarities and Raven's. But it is not a test of reading comprehension but a test of perceiving the "logic" of social arrangements - for example, why streets are numbered in order. The reading tests of the Nation's Report Card show no gain at age 17 because you are expected to read adult novels. Since young people today have no larger vocabularies and funds of general information than their ancestors did, they cannot read these works with any greater understanding.
5. In What is Intelligence?, you discuss the importance of "Short Hand Abstractions" or "SHAs" as part of an educated person's mental toolkit. What are they and how do they relate to your intelligence research?
IQ tests have missed a striking cognitive development of the 20th century, namely, that the various sciences and philosophy have enriched our minds by gradually giving educated people short-hand abstractions (SHAs) that allow us to critically analyze our world. For example, the word "market" no longer stands for a place but for the law of supply and demand, and you can use it to see why rent controls are self-defeating. The concept of "tautology" can make us more sophisticated about history. If someone says "Christianity has been a force for good", and explains away all the slaughter Christians have perpetrated by saying that they "were not real Christians", we can immediately see the flaw. If only good people qualify as Christians, the goodness of Christians has been established by definition! Sadly, universities never give their graduates a full tool kit of these wonderful analytic concepts.
7. Recently, some IQ researchers have argued that if the Flynn Effect is g-loaded, then we should see a fall in the factor loadings across subtests over time. Their story is that cross-sectionally, we know that people with high IQ scores have more specificity - that is, they have greater strengths and weaknesses relative to the average person. Do you place much weight on that hypothesis, and do you think it might explain why IQ gains over time are distributed the way they are?
The IQ gains are not g-loaded so the prediction is beside the point. The importance of cognitive trends over time is a matter of their social utility. Whether they happen to be greatest on skills that have the highest g-loading is a distraction.
7. The Dickens-Flynn model (Psych. Review, 2001) attempts to explain the apparent high heritability of IQ by arguing that people with good genes end up endogenously in good environments, which in turn raises their IQs even more. In your new book, you propose a number of ways to test this hypothesis. Do you think that the Dickens-Flynn model is all that's needed to explain differences in average IQ across ethnic groups, or do you think that other explanations might be needed?
The Dickens-Flynn model does nothing to evidence that IQ gaps between groups are environmental rather than genetic in origin. That evidence must come from specific environmental hypotheses about what handicaps (say) black Americans suffer as they age. What the model shows is that twin studies (which emphasize the effects of genetic differences between individuals) do nothing to prejudice an environmental explanation of group differences.
8. Out of the many research designs you propose in What is Intelligence, which one would you most like to see performed and why?
The one that calls for investigation of urban and rural Brazil. I think the former approximates where Americans are today, and the latter approximates where Americans were in 1900. We could get direct evidence for or against the cognitive history of Americans in the 20th century that my book relates.
9. You've long said that you disagree with Richard Lynn's view that the Flynn Effect is largely driven by better nutrition. One of Lynn's pieces of evidence is that IQ gains show up at very early ages, which would be surprising if the Flynn Effect were entirely sociological. Why do you think IQ gains show up at such an early age, and about what fraction of IQ gains do you think might be due to nutrition?
Changing ratios of adults to children in the home (smaller families) and changed modes of dealing with infants affect cognitive development from birth. The nutrition hypothesis explains little in America since 1950 - the evidence is in the book.
10. You've shaken up the field of intelligence research every time you've published a book on the topic. What are you working on for your next project?
My next book is in press. It will be called: The hollow center: race, class, and ideas in America. It will attempt to shake Americans into awareness that they are blind to the state of black America, that their foreign and domestic policies have perverse priorities, that they are class blind, and have lost their way in terms of Jefferson's humane ideals. It is, however, a hopeful book in the sense that there is much in America's history that can show us how to find our way.
In economics, a rule of thumb is that an academic article that largely agrees with Herrnstein and Murray's The Bell Curve must start off by attacking The Bell Curve--maybe it's just a way to get past peer review, maybe it's a way of keeping your status in the academic community, maybe it's because they didn't understand or read H&M, could be all of the above.
The same is now apparently true with discussions of women in science: When arguing that the link between gender and scientific abilities is subtle and complex, it's apparently mandatory to attack Larry Summers for being simplistic, even though he himself noted that the relationship was very likely to be subtle and complex.
Latest example: Scientific American.
If Larry Summers's comments had one appealing feature, it was the benefit of simplicity...however, the truth is not so simple.
The multiple authors at SciAm march through the various hypotheses: work expectations, biology, glass ceilings. And they're honest enough to point out that there's some evidence for all three hypotheses--and they even note the apparent role of pre-natal sex hormones in shaping brain development. Just as you'd expect, they find at least tentative evidence for all three stories.
But it looks like the SciAm folks didn't even bother to look at Summers's own remarks. If they had, they would have realized that Summers himself could have written the outline for their article (emphasis added):
There are three broad hypotheses about the sources of the very substantial disparities that this conference's papers document and have been documented before with respect to the presence of women in high-end scientific professions. One is what I would call the-I'll explain each of these in a few moments and comment on how important I think they are-the first is what I call the high-powered job hypothesis. The second is what I would call different availability of aptitude at the high end, and the third is what I would call different socialization and patterns of discrimination in a search. And in my own view, their importance probably ranks in exactly the order that I just described.
Work expectations, biology, glass ceilings. Yes, Summers weighs the alternatives differently than the SciAm folks, and yes, I'm conflating some issues in this short blog post, but you can read the articles yourself to double check the subtleties (e.g., the separate discussions of "abilities" and "biology."). But the main point is that there's nothing "simple" about Summers's story.
Just as Jason noted about the press's treatment of James Watson, so too with Summers: the popular scientific press rarely lets the facts get in the way of a good plotline. The "Summers was simplistic, but the truth is complex" plotline is just too handy. Unfortunately for Scientific American, Summers, despite his reputation as a reductionistic economist, didn't fall for a "simple" explanation.
A few days ago (see a few posts down) I reported on recent education statistics in Britain, which, as usual, purport to show that educational achievement has risen. Not surprisingly, some comments were sceptical about the reality of such claims.
Their scepticism seems to be vindicated by the latest international comparisons from an OECD study, which shows Britain falling from 7th to 17th place (out of 50 or so 'advanced' countries) between 2001 and 2007.
But before our American readers start gloating, they should note that the US is only 21st in the same table. Indeed, on searching Google News for "OECD education", it seems that just about every country in the study finds something to moan about - even the Canadians, who are very close to the top.
And who is actually at the top? Sorry, Razib, but it's Finland.
Added on 6 December: It has rightly been pointed out that the UK ranking of 17th relates to Reading, while the US ranking of 21st relates to Science, so they are not comparable. Mea culpa. The correct figures are: UK: Reading 17th, Science 14th, Mathematics 24th. USA: Science 29th, Mathematics 35th. There does not seem to be a Reading ranking for the USA. I think I read somewhere that there was some error in the American test papers which invalidated the results for Reading.
Tuesday, December 04, 2007
The videos are up.
Monday, December 03, 2007
Jonathan Eisen points out that the Personal Genome Project is looking for volunteers. If you read this website, you're probably well-enough informed about genetics to know the risks and benefits of having your genome sequence publicly available (i.e., both are pretty minimal at this point).
Apparently the initial goal is to sequence the "exome" (i.e., the 2-3% of the genome that actually encodes proteins) of all the individuals, and make it public along with phenotypic information. This is quite the opposite of the approach usually taken in genetics, where the privacy of subjects is to be protected at all costs. It's also precisely how genetics needs to be done in the future.