Split brains, autism and schizophrenia


A new study suggests that a gene known to be causally linked to schizophrenia and other psychiatric disorders is involved in the formation of connections between the two hemispheres of the brain. DISC1 is probably the most famous gene in psychiatric genetics, and rightly so. It was discovered in a large Scottish pedigree, where 18 members were affected by psychiatric disease.
The diagnoses ranged from schizophrenia and bipolar disorder to depression and a range of “minor” psychiatric conditions. It was found that the affected individuals had all inherited a genetic anomaly – a translocation of genetic material between two chromosomes. This basically involves sections of two chromosomes swapping with each other. In the process, each chromosome is broken, before being spliced back to part of the other chromosome. In this case, the breakpoint on chromosome 1 interrupted a gene, subsequently named Disrupted-in-Schizophrenia-1, or DISC1.

That this discovery was made using classical “cytogenetic” techniques (physically looking at the chromosomes down a microscope) and in a single family is somehow pleasing in an age where massive molecular population-based studies are in vogue. (A win for “small” science).

The discovery of the DISC1 translocation clearly showed that disruption of a single gene could lead to psychiatric disorders like schizophrenia. This was a challenge to the idea that these disorders were “polygenic” – caused by the inheritance in each individual of a large number of genetic variants. As more and more mutations in other genes are being found to cause these disorders, the DISC1 situation can no longer be dismissed as an exception – it is the norm.

It also was the first example of a principle that has since been observed for many other genes – namely that the effects of the mutation can manifest quite variably – not as one specific disorder, but as different ones in different people. Indeed, DISC1 has since been implicated in autism as well as adult-onset disorders. It is now clear from this and other evidence that these apparently distinct conditions are best thought of as variable outcomes that arise, in many cases at least, from disturbances of neurodevelopment.

Since the initial discovery, major research efforts of a growing number of labs have been focused on the next obvious questions: what does DISC1 do? And what happens when it is mutated? What happens in the brain that can explain why psychiatric symptoms result?

We now know that DISC1 has many different functions. It is a cytoplasmic protein – localised inside the cell – that interacts with a very large number of other proteins and takes part in diverse cellular functions, including cell migration, outgrowth of nerve fibres, the formation of dendritic spines (sites of synaptic contact between neurons), neuronal proliferation and regulation of biochemical pathways involved in synaptic plasticity. Many of the proteins that DISC1 interacts with have also been implicated in psychiatric disease.

This new study adds another possible function, and a dramatic and unexpected one at that. This function was discovered from an independent angle, by researchers studying how the two hemispheres of the brain get connected – or more specifically, why they sometimes fail to be connected. The cerebral hemispheres are normally connected by millions of axons which cross the midline of the brain in a structure called the corpus callosum (or “tough body” – don’t ask). Very infrequently, people are born without this structure – the callosal axons fail to cross the midline and the two hemispheres are left without this major route of communication (though there are other routes, such as the anterior commissure).

The frequency of agenesis of the corpus callosum has been estimated at between 1 in 1,000 and 1 in 6,000 live births – thankfully very rare. It is associated with a highly variable spectrum of other symptoms, including developmental delay, autistic symptoms, cognitive disabilities extending into the range of mental retardation, seizures and other neurological signs.

Elliott Sherr and colleagues were studying patients with this condition, which is very obvious on magnetic resonance imaging scans (see Figure). They initially found a mother and two children with callosal agenesis who all carried a deletion on chromosome 1, at position 1q42 – exactly where DISC1 is located. They subsequently identified another patient with a similar deletion, which allowed them to narrow down the region and identify DISC1 as a plausible candidate (among some other genes in the deleted region). Because the functions of proteins can be affected not just by large deletions or translocations but also by less obvious mutations that change a single base of DNA, they also sequenced the DISC1 gene in a cohort of callosal agenesis patients and found a number carrying novel mutations that are very likely to disrupt the function of the gene.

While not rock-solid evidence that it is DISC1 that is responsible, these data certainly point to it as the strongest candidate to explain the callosal defect. This hypothesis is strongly supported by findings from DISC1 mutant mice (carrying a mutation that mimics the effect of the human translocation), which also show defects in formation of the corpus callosum. In addition, the protein is strongly expressed in the axons that make up this structure at the time of its development.

The most obvious test of whether disruption of DISC1 really causes callosal agenesis is to look in the people carrying the initial translocation. Remarkably, it is not known whether the original patients in the Scottish pedigree who carry the DISC1 translocation show this same obvious brain structural phenotype. They have, very surprisingly, never been scanned.

This new paper raises the obvious hypothesis that the failure to connect the two hemispheres results in the psychiatric or cognitive symptoms, which variously include reduced intellectual ability, autism and schizophrenia. This seems like too simplistic an interpretation, however. All we have now is a correlation. First, the implication of DISC1 in the acallosal phenotype is not yet definitive – this must be nailed down and replicated. But even if it is shown that disruption of DISC1 causes both callosal agenesis and schizophrenia (or other psychiatric disorders or symptoms), this does not prove a causal link between the two. DISC1 has many other functions and is expressed in many different brain areas (ubiquitously, in fact). Any, or indeed all, of these functions may be the cause of psychopathology.

One prediction, if it were true that the lack of connections between the two hemispheres is causal, is that we would expect the majority of patients with callosal agenesis to have these kinds of psychiatric symptoms. The rates are indeed very high – different studies have estimated that up to 40% of callosal agenesis patients have an autism diagnosis, while about 8% have symptoms of schizophrenia or bipolar disorder. (Of course, these patients may have other, less obvious brain defects as well, so even this is not definitive).

Conversely, we might naively expect a high rate of callosal agenesis in patients with autism or schizophrenia. However, we know these disorders are extremely heterogeneous and so it is much more likely that this phenotype might be apparent in only a specific (possibly very small) subset of patients. This may indeed be the case – callosal agenesis has been observed in about 3 out of 200 schizophrenia patients (a vastly higher rate than in the general population). Another study, just published, has found that mutations in a different gene – ARID1B – are also associated with callosal agenesis, mental retardation and autism. More generally, there may be subtle reductions in callosal connectivity in many schizophrenia or autism patients (including some autistic savants).

Whether this defect can explain particular symptoms is not yet clear. For the moment, the new study provides yet another possible function of DISC1, and highlights an anatomical phenotype that is apparently present in a subset of autism and schizophrenia cases and that can arise due to mutation in many different genes – of which DISC1 and ARID1B are just two known examples.

One final note: formation of the corpus callosum is a dramatic example of a process that is susceptible to developmental variation. What I mean is this: when patients inherit a mutation that results in callosal agenesis, this phenotype occurs in some patients but not all. This is true even in genetically identical people, like monozygotic twins or triplets (or in lines of genetically identical mice). Though the corpus callosum contains millions of nerve fibres, the initial events that establish it involve very small numbers of cells. These cells, which are located at the medial edge of each cerebral hemisphere, must contact each other to enable the fusion of the two hemispheres, forming a tiny bridge through which the first callosal fibres can cross. Once these are across, the rest seem able to follow easily. Because this event involves very few cells at a specific time in development, it is susceptible to random “noise” – fluctuations in the precise amounts of various proteins in the cells, for example. These are not caused by external forces – the noise is inherent in the system. The result is that, in some people carrying such a mutation, the corpus callosum will not form at all, while in others it forms apparently normally (see figure of triplets, the one on the left with a normal corpus callosum, the other two with it absent). So an all-or-none effect can arise without any external factors being involved.
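
To make this concrete, here is a deliberately toy simulation of the idea: a small pool of “pioneer” cells must collectively exceed a threshold for the callosal bridge to form, and intrinsic noise alone then produces an all-or-none outcome among genetically identical individuals. All of the numbers (pool size, threshold, noise level, the effect of the “mutation”) are invented purely for illustration – this is my sketch of the principle, not a model from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def callosum_forms(n_pioneer_cells=20, mean_signal=1.0, noise_sd=0.35,
                   threshold=18.0):
    """Toy model: the callosum forms only if the summed 'guidance signal'
    from a small pool of pioneer cells exceeds a threshold.
    All parameter values are invented for illustration."""
    signal = rng.normal(mean_signal, noise_sd, size=n_pioneer_cells).sum()
    return signal > threshold

# Genetically identical individuals, with and without a hypothetical mutation
# that slightly lowers the mean signal per cell.
for label, mean_signal in [("typical genotype", 1.00), ("mutation carrier", 0.92)]:
    outcomes = [callosum_forms(mean_signal=mean_signal) for _ in range(10_000)]
    print(f"{label}: corpus callosum forms in {np.mean(outcomes):.0%} of individuals")
```

Run with these (arbitrary) settings, the mutation carriers split into a sizeable fraction with no callosum and a majority with a normal one – an all-or-none outcome driven entirely by intrinsic noise.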

This same kind of intrinsic developmental variation may also explain or at least contribute to the variability in phenotypic outcome at the level of psychiatric symptoms when these kinds of neurodevelopmental mutations are inherited. Even monozygotic twins are often discordant for psychiatric diagnoses (concordance for schizophrenia is about 50%, for example). This is often assumed to be due to non-genetic and therefore “environmental” or experiential factors. If these disorders really arise from differences in brain wiring, which we know are susceptible to developmental variation, then differences in the eventual phenotype could actually be completely intrinsic and innate.

Osbun N, Li J, O’Driscoll MC, Strominger Z, Wakahiro M, Rider E, Bukshpun P, Boland E, Spurrell CH, Schackwitz W, Pennacchio LA, Dobyns WB, Black GC, & Sherr EH (2011). Genetic and functional analyses identify DISC1 as a novel callosal agenesis candidate gene. American journal of medical genetics. Part A, 155 (8), 1865-76 PMID: 21739582

Halgren C, Kjaergaard S, Bak M, Hansen C, El-Schich Z, Anderson CM, Henriksen KF, Hjalgrim H, Kirchhoff M, Bijlsma EK, Nielsen M, den Hollander NS, Ruivenkamp CA, Isidor B, Le Caignec C, Zannolli R, Mucciolo M, Renieri A, Mari F, Anderlid BM, Andrieux J, Dieux A, Tommerup N, & Bache I (2011). Corpus Callosum Abnormalities, Mental Retardation, Speech Impairment, and Autism in Patients with Haploinsufficiency of ARID1B. Clinical genetics PMID: 21801163

Welcome to your genome


There is a common view that the human genome has two different parts – a “constant” part and a “variable” part. According to this view, the bases of DNA in the constant part are the same across all individuals. They are said to be “fixed” in the population. They are what make us all human – they differentiate us from other species. The variable part, in contrast, is made of positions in the DNA sequence that are “polymorphic” – they come in two or more different versions. Some people carry one base at that position and others carry another. The idea is that it is the particular set of such variations that we inherit that makes us each unique (unless we have an identical twin). According to this idea, we each have a hand dealt from the same deck.

The genome sequence (a simple linear code made up of 3 billion bases of DNA in precise order, chopped up onto different chromosomes) is peppered with these polymorphic positions – about 1 in every 1,250 bases. That makes about 2,400,000 polymorphisms in each genome (and we each carry two copies of the genome). That certainly seems like plenty of raw material, with limitless combinations that could explain the richness of human diversity. This interpretation has fuelled massive scientific projects to try and find which common polymorphisms affect which traits. (Not to mention personal genomics companies who will try to tell you your risk of various diseases based on your profile of such polymorphisms).

The problem with this view is that it is wrong. Or at least woefully incomplete.

The reason is that it ignores another source of variation: very rare mutations in those bases that are constant across the vast majority of individuals. There is now very good evidence that it is these kinds of mutations that contribute most to our individuality – they are much more likely than common polymorphisms to affect a protein’s function and to contribute to genetic disease. We each carry hundreds of such rare mutations that can affect protein function or expression, and that are therefore far more likely to have a phenotypic impact.

Indeed, far from most of the genome being effectively constant, it can be estimated that every position in the genome has been mutated many, many times over in the human population. And each of us carries hundreds of new mutations that arose during generation of the sperm and egg cells that fused to form us. New mutations may spread in the pedigree or population in which they arise for some time, depending in part on whether they have a deleterious effect or not. Ones that do will likely be quickly selected against.

A new paper from the 1000 genomes project consortium shows that:

“the vast majority of human variable sites are rare and that the majority of rare variants exhibit, at most, very little sharing among continental populations”.

This is a much more fluid picture of genetic variation than we are used to. We are not all dealt a genetic hand from the same deck – each population, sub-population, kindred, nuclear family has a distinct set of rare genetic variants. And each of these decks contains a lot of jokers – the new mutations that arise each time a hand is dealt.

Why have such rare mutations generally been ignored while the polymorphic sites have been the focus of intense research? There are several reasons, some practical and some theoretical. Practically, it has until recently been almost impossible to systematically find very rare mutations. To do so requires that we sequence the whole genome, which has only recently become feasible. In contrast, methods to survey which bases you carry at all the polymorphic sites across the genome were developed quite some time ago now and are relatively cheap to use. (They rely on sampling about 500,000 such sites around the genome – because of unevenness in the way different bits of chromosomes get swapped when sperm and eggs are made, this sample actually tells you about most of the variable sites across the whole genome). So, there has been a tendency to argue that polymorphic sites will be major contributors to human phenotypes (especially diseases) because those have been the only ones we have been able to look at.

Unfortunately, the results of genome-wide association studies, which aim to identify common variants associated with traits or diseases, have been disappointing. This is especially true for disorders with large effects on fitness, such as schizophrenia or autism. Some variants have been found but their effects, even in combination, are very small. Most of the heritability of most of the traits or diseases examined to date remains unexplained. (There are some important exceptions, especially for diseases that strike only late in life and for things like drug responses, where selective pressures to weed out deleterious alleles are not at play).

In contrast, many more rare mutations causing disease are being discovered all the time, and the pace of such discoveries is likely to increase with technological advances. The main message that emerges from these studies has been called by Mary-Claire King the “Anna Karenina principle”, based on Tolstoy’s famous opening line:

“Happy families are all alike; every unhappy family is unhappy in its own way”

But can such rare variants really explain the “missing heritability” of these disorders? Some people have argued that they cannot, but this seems to me to be based on a pervasive misconception of how the heritability of a trait is measured and what it means. According to this misconception, if a trait is heritable across the population, that heritability cannot be accounted for by rare variants. After all, if a mutation only occurs in one or a few individuals, it could only minimally (nearly negligibly) contribute to heritability across the whole population. That is true. However, heritability is not measured across the population – it is measured in families and then averaged across the population.

In humans, it is usually derived by comparing phenotypes between people of different genetic relatedness (identical versus fraternal twins, siblings, parents, cousins, etc.). The values of these comparisons are then averaged across large numbers of pairs to allow estimates of how much genetic variance affects phenotypic variance – the population heritability. While a specific rare mutation may only affect the phenotype within a single family, such mutations could, collectively, explain all of the heritability. Completely different sets of mutations could be affecting the trait or causing the disease in different families.
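
As a rough illustration of this point, the toy simulation below (my own sketch, with arbitrary parameter values) models a disease caused by family-private, dominant mutations of large effect, each one vanishingly rare in the population. Twin concordance still comes out much higher in MZ than DZ pairs – the trait looks highly heritable – even though no individual variant is common or shared between families.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: the disease can be caused by a dominant mutation in any one of
# a huge number of genes, so each carrier family carries a *different* rare
# mutation. All parameter values are invented for illustration.
N_PAIRS = 500_000
CARRIER_FREQ = 0.02      # fraction of families carrying some rare mutation
PENETRANCE = 0.6         # P(disease | mutation)
BASELINE_RISK = 0.002    # P(disease | no mutation)

def affected(carrier):
    risk = np.where(carrier, PENETRANCE, BASELINE_RISK)
    return rng.random(carrier.shape) < risk

def concordance(monozygotic):
    carrier1 = rng.random(N_PAIRS) < CARRIER_FREQ
    # An MZ co-twin always shares the mutation; a DZ co-twin does half the time.
    carrier2 = carrier1 if monozygotic else carrier1 & (rng.random(N_PAIRS) < 0.5)
    twin1, twin2 = affected(carrier1), affected(carrier2)
    return (twin1 & twin2).sum() / twin1.sum()   # P(co-twin affected | twin affected)

print(f"MZ concordance: {concordance(True):.2f}")
print(f"DZ concordance: {concordance(False):.2f}")
```

With these made-up numbers the MZ concordance comes out roughly twice the DZ concordance, the classic signature of strong heritability, despite every causal variant being private to a single family.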

The next few years will reveal the true impact of rare mutations. We should certainly expect complex genetic interactions and some real effects of common polymorphisms. But the idea that our traits are determined simply by the combination of variants we inherit from a static pool in the population is no longer tenable. We are each far more unique than that.

(And if your personal genomics company isn’t offering to sequence your whole genome, it’s not personal enough).

Gravel S, Henn BM, Gutenkunst RN, Indap AR, Marth GT, Clark AG, Yu F, Gibbs RA, The 1000 Genomes Project, & Bustamante CD (2011). Demographic history and rare allele sharing among human populations. Proceedings of the National Academy of Sciences of the United States of America, 108 (29), 11983-11988 PMID: 21730125

Walsh CA, & Engle EC (2010). Allelic diversity in human developmental neurogenetics: insights into biology and disease. Neuron, 68 (2), 245-53 PMID: 20955932

McClellan, J., & King, M. (2010). Genetic Heterogeneity in Human Disease Cell, 141 (2), 210-217 DOI: 10.1016/j.cell.2010.03.032

Mirrored from Wiring the Brain

Hallucinating neural networks


Hearing voices is a hallmark of schizophrenia and other psychotic disorders, occurring in 60-80% of cases. These voices are typically identified as belonging to other people and may be voicing the person’s thoughts, commenting on their actions or ideas, arguing with each other or telling the person to do something. Importantly, these auditory hallucinations are as subjectively real as any external voices. They may in many cases be critical or abusive and are often highly distressing to the sufferer.

However, many perfectly healthy people also regularly hear voices – as many as 1 in 25 according to some studies, and in most cases these experiences are perfectly benign. In fact, we all hear voices “belonging to other people” when we dream – we can converse with these voices, waiting for their responses as if they were derived from external agents. Of course, these percepts are actually generated by the activity of our own brain, but how?

There is good evidence from neuroimaging studies that the same areas that respond to external speech are active when people are having these kinds of auditory hallucinations. In fact, inhibiting such areas using transcranial magnetic stimulation may reduce the occurrence or intensity of heard voices. But why would the networks that normally process speech suddenly start generating outputs by themselves? Why would these outputs be organised in a way that fits speech patterns, as opposed to random noise? And, most importantly, why does this tend to occur in people with schizophrenia? What is it about the pathology of this disorder that makes these circuits malfunction in this specific way?

An interesting approach to getting answers to these questions has been to model these circuits in artificial neural networks. If you can generate a network that can process speech inputs and find certain conditions under which it begins to spontaneously generate outputs, then you may have an informative model of auditory hallucinations. Using this approach, a couple of studies published several years ago by the group of Ralph Hoffman found some interesting clues as to what may be going on, at least at an abstract level.

Their approach was to generate an artificial neural network that could process speech inputs. Artificial neural networks are basically sets of mathematical functions modelled in a computer programme. They are designed to simulate the information-processing functions carried out by individual neurons and, more importantly, the computational functions carried out by an interconnected network of such neurons. They are necessarily highly abstract, but they can recapitulate many of the computational functions of biological neural networks. Their strength lies in revealing unexpected emergent properties of such networks.

The particular network in this case consisted of three layers of neurons – an input layer, an output layer, and a “hidden” layer in between – along with connections between these elements (from input to hidden and from hidden to output, but crucially also between neurons within the hidden layer). “Phonetic” inputs were fed into the input layer – these consisted of models of speech sounds constituting grammatical sentences. The job of the output layer was to report what was heard – representing different sounds by patterns of activation of its forty-three neurons. Seems simple, but it’s not. Deciphering speech sounds is actually very difficult as individual phonetic elements can be both ambiguous and variable. Generally, we use our learned knowledge of the regularities of speech and our working memory of what we have just heard to anticipate and interpret the next phonemes we hear – forcing them into recognisable categories. Mimicking this function of our working memory is the job of the hidden layer in the artificial neural network, which is able to represent the prior inputs by the pattern of activity within this layer, providing a context in which to interpret the next inputs.
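
For readers who think in code, here is a rough sketch of this kind of architecture – an Elman-style recurrent network – written in PyTorch. The 43 output units come from the description above; the input and hidden layer sizes are placeholders of my own choosing, and the published model’s details differ.

```python
import torch
import torch.nn as nn

class SpeechPerceptionNet(nn.Module):
    """Three-layer network with a recurrent 'working memory' hidden layer,
    loosely in the spirit of the model described above. Sizes other than the
    43 output units are placeholders, not the published values."""
    def __init__(self, n_phonetic_inputs=30, n_hidden=80, n_outputs=43):
        super().__init__()
        # The hidden layer keeps a trace of what was just heard, providing
        # the context in which the next phonetic input is interpreted.
        self.hidden = nn.RNN(n_phonetic_inputs, n_hidden, batch_first=True)
        self.output = nn.Linear(n_hidden, n_outputs)

    def forward(self, phonetic_seq):
        # phonetic_seq: (batch, time, n_phonetic_inputs)
        context, _ = self.hidden(phonetic_seq)
        return self.output(context)   # (batch, time, n_outputs)
```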

The important thing about neural networks is they can learn. Like biological networks, this learning is achieved by altering the strengths of connections between pairs of neurons. In response to a set of inputs representing grammatical sentences, the network weights change in such a way that when something similar to a particular phoneme in an appropriate context is heard again, the pattern of activation of neurons representing that phoneme is preferentially activated over other possible combinations.
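
Continuing the sketch above, a generic supervised training loop shows the principle: the connection weights are nudged so that the output pattern for each phoneme, in its context, becomes more likely. The random synthetic “sentences” here merely stand in for the phonetic corpus used in the real simulations.

```python
# Generic training step on synthetic data (the real model was trained on a
# corpus of grammatical sentences, which is not reproduced here).
net = SpeechPerceptionNet()
optimiser = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(16, 12, 30)            # 16 "sentences", 12 time steps each
targets = torch.randint(0, 43, (16, 12))    # target phoneme index per time step

for epoch in range(100):
    logits = net(inputs)                               # (16, 12, 43)
    loss = loss_fn(logits.reshape(-1, 43), targets.reshape(-1))
    optimiser.zero_grad()
    loss.backward()                                    # adjust connection strengths
    optimiser.step()
```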

The network created by these researchers was an able student and readily learned to recognise a variety of words in grammatical contexts. The next thing was to manipulate the parameters of the network in ways that are thought to model what may be happening to biological neuronal networks in schizophrenia.

There are two major hypotheses that were modelled: the first is that networks in schizophrenia are “over-pruned”. This fits with a lot of observations, including neuroimaging data showing reduced connectivity in the brains of people suffering with schizophrenia. It also fits with the age of onset of the florid expression of this disorder, which is usually in the late teens to early twenties. This corresponds to a period of brain maturation characterised by an intense burst of pruning of synapses – the connections between neurons.

In schizophrenia, the network may have fewer synapses to begin with, but not so few that it doesn’t work well. This may however make it vulnerable to this process of maturation, which may reduce its functionality below a critical threshold. Alternatively, the process of synaptic pruning may be overactive in schizophrenia, damaging a previously normal network. (The evidence favours earlier disruptions).

The second model involves differences in the level of dopamine signalling in these circuits. Dopamine is a neuromodulator – it alters how neurons respond to other signals – and is a key component of active perception. It plays a particular role in signalling whether inputs match top-down expectations derived from our learned experience of the world. There is a wealth of evidence implicating dopamine signalling abnormalities in schizophrenia, particularly in active psychosis. Whether these abnormalities are (i) the primary cause of the disease, (ii) a secondary mechanism causing specific symptoms (like psychosis), or (iii) the brain attempting to compensate for other changes is not clear.

Both over-pruning and alterations to dopamine signalling could be modelled in the artificial neural network, with intriguing results. First, a modest amount of pruning, starting with the weakest connections in the network, was found to actually improve the performance of the network in recognising speech sounds. This can be understood as an improvement in the recognition and specificity of the network for sounds which it had previously learned and probably reflects the improvements seen in human language learners, along with the concomitant loss in ability to process or distinguish unfamiliar sounds (like “l” and “r” for Japanese speakers).
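
One simple way to mimic this kind of pruning in the sketch above is to zero out the weakest fraction of connection weights. The percentile-based rule here is my own stand-in, not necessarily the scheme used in the published simulations.

```python
def prune_weakest(net, fraction):
    """Set the weakest `fraction` of connection weights to zero
    (a crude stand-in for developmental synaptic pruning)."""
    with torch.no_grad():
        for param in net.parameters():
            if param.dim() < 2:      # leave bias terms alone
                continue
            cutoff = torch.quantile(param.abs().flatten(), fraction)
            param[param.abs() < cutoff] = 0.0

prune_weakest(net, 0.20)   # modest pruning (improved performance in the published model)
prune_weakest(net, 0.60)   # heavy pruning (where deficits and hallucinations emerged)
```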

However, when the network was pruned beyond a certain level, two interesting things happened. First, its performance got noticeably worse, especially when the phonetic inputs were degraded (i.e., the information was incomplete or ambiguous). This corresponds quite well with another feature of schizophrenia, particularly in those who experience auditory hallucinations – sufferers show phonetic processing deficits under challenging conditions, such as a crowded room.

The second effect was even more striking – the network started to hallucinate! It began to produce outputs even in the absence of any inputs (i.e., during “silence”). When not being driven by reliable external sources of information, the network nevertheless settled into a state of activity that represented a word. The reason the output is a word and not just a meaningless pattern of neurons is that the previous learning that the network undergoes means that patterns representing words represent “attractors” – if some random neurons start to fire, the weighted connections representing real words will rapidly come to dominate the overall pattern of activity in the network, resulting in the pattern corresponding to a word.
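
In the toy network above, the analogous probe would be to present “silence” (zero input, perhaps with a little internal noise) and ask whether the output layer still commits to a learned pattern. This is only a schematic version of the published analysis, continuing the earlier sketch.

```python
# Probe for 'hallucinated' output: present silence plus a little internal
# noise and see whether the network still commits to a phoneme/word pattern.
silence = torch.zeros(1, 12, 30) + 0.05 * torch.randn(1, 12, 30)
with torch.no_grad():
    logits = net(silence)
    probs = torch.softmax(logits, dim=-1)

print("most active output unit per time step:", probs.argmax(dim=-1))
print("confidence of those outputs:", probs.max(dim=-1).values)
```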

Modelling alterations in dopamine signalling also produced both a defect in parsing degraded speech inputs and hallucinations. Too much dopamine signalling produced these effects, but so did a combination of moderate over-pruning and compensatory reductions in dopamine signalling, highlighting the complex interactions possible.
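
Dopamine’s neuromodulatory effect is commonly abstracted as a gain (or inverse temperature) on the competition between outputs; in the sketch above it could be added as below. This is a generic abstraction, not the specific implementation used in the papers.

```python
# Dopamine-like neuromodulation modelled as a gain on the output competition:
# higher gain sharpens winner-take-all behaviour, so weak, internally generated
# activity is more readily forced into a definite 'word' pattern.
def perceive(net, phonetic_seq, dopamine_gain=1.0):
    with torch.no_grad():
        return torch.softmax(dopamine_gain * net(phonetic_seq), dim=-1)

normal = perceive(net, silence, dopamine_gain=1.0)
hyperdopaminergic = perceive(net, silence, dopamine_gain=3.0)
print(normal.max(), hyperdopaminergic.max())   # the latter is more 'certain'
```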

The conclusion from these simulations is not necessarily that this is exactly how hallucinations emerge. After all, the artificial neural networks are pretty extreme abstractions of real biological networks, which have hundreds of different types of neurons and synaptic connections and which are many orders of magnitude more complex numerically. But these papers do provide at least a conceptual demonstration of how a circuit designed to process speech sounds can fail in such a specific and apparently bizarre way. They show that auditory hallucinations can be viewed as the outputs of malfunctioning speech-processing circuits.

They also suggest that different types of insult to the system can lead to the same type of malfunction. This is important when considering new genetic data indicating that schizophrenia can be caused by mutations in any of a large number of genes affecting how neural circuits develop. One way that so many different genetic changes could lead to the same effect is if the effect is a natural emergent property of the neural networks involved.

Hoffman, R., & Mcglashan, T. (2001). Book Review: Neural Network Models of Schizophrenia The Neuroscientist, 7 (5), 441-454 DOI: 10.1177/107385840100700513

Hoffman, R., & McGlashan, T. (2006). Using a Speech Perception Neural Network Computer Simulation to Contrast Neuroanatomic versus Neuromodulatory Models of Auditory Hallucinations Pharmacopsychiatry, 39, 54-64 DOI: 10.1055/s-2006-931496

Mirrored from Wiring the Brain

On the (un)importance of kin selection


While writing a recent short note on Richard Dawkins and kin selection, I looked through my previous posts on the subject, and found what I thought was a blunder in an old post from 2004. To avoid misleading anyone who came across it in a search, I deleted it from the archive. But on further reflection I have concluded that there was no blunder after all…

Peer-review: end it, don’t mend it


At Genomes Unzipped, Joe Pickrell has an important post up, Why publish science in peer-reviewed journals?:

The recent announcement of a new journal sponsored by the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust generated a bit of discussion about the issues in the scientific publishing process it is designed to address—arbitrary editorial decisions, slow and unhelpful peer review, and so on. Left unanswered, however, is a more fundamental question: why do we publish scientific articles in peer-reviewed journals to begin with? What value does the existence of these journals add? In this post, I will argue that cutting journals out of scientific publishing to a large extent would be unconditionally a good thing, and that the only thing keeping this from happening is the absence of a “killer app”.

It works for physics, computer science, and to a great extent the social sciences. Why not the biosciences?

Environmental influences on autism – splashy headlines from dodgy data


A couple of recent papers have been making headlines in relation to autism, one claiming that it is caused less by genetics than previously believed and more by the environment, and the other specifically claiming that antidepressant use by expectant mothers increases the risk of autism in the child. But are these conclusions really supported by the data? Are they strongly enough supported to warrant being splashed across newspapers worldwide, where most readers will remember only the headline as the take-away message? The legacy of the MMR vaccination hoax shows how difficult it can be to counter overblown claims and the negative consequences that can arise as a result.

So, do these papers really make a strong case for their major conclusions? The first gives results from a study of twins in California. Twin studies are a classic method to determine whether something is caused by genetic or environmental factors. The method asks, if one twin in a pair is affected by some disorder (autism in this case), with what frequency is the other twin also affected? The logic is very simple: if something is caused by environmental factors, particularly those within a family, then it should not matter whether the twins in question are identical or fraternal – their risk should be the same because their exposure is the same. On the other hand, if something is caused by genetic mutations, and if one twin has the disorder, then the rate of occurrence of the disorder in the other twin should be much higher if they are genetically identical than if they only share half their genes, as fraternal twins do.

Working backwards, if the rate of twin concordance for affected status is about the same for identical and fraternal twins, this is strong evidence for environmental factors. If the rate is much higher in monozygotic twins, this is strong evidence for genetic factors. Now to the new study. What they found was that the rate of concordance for monozygotic (identical) twins was indeed much higher than for dizygotic (fraternal) twins – about twice as high on average.

For males: MZ: 0.58, DZ: 0.21
For females: MZ: 0.60, DZ: 0.27

Those numbers are for the diagnosis of strict autism. The rate of “autism spectrum disorder”, which encompasses a broader range of disability, showed similar results:

Males: MZ: 0.77, DZ: 0.31
Females: MZ: 0.50, DZ: 0.36.

These numbers fit pretty well with a number of other recent twin studies, all of which have concluded that they provide evidence for strong heritability of the disorder – i.e., that whether or not someone develops autism is largely (though not exclusively) down to genetics.

So, why did these authors reach a different conclusion and should their study carry any more weight than others? On the latter point, the study is significantly larger than many that have preceded it. This study looked at 192 twin pairs, each with at least one affected twin. However, some recent studies have been comparable or even larger: Lichtenstein and colleagues looked at 117 twin pairs and Rosenberg and colleagues looked at 277 twin pairs. These studies found evidence for very high heritability and negligible shared environmental effects.

Another potentially important difference is in how the sample was ascertained. Hallmayer and colleagues claim that their assessment of affected status was more rigorous than for other studies and this may be true. However, it has previously been found that less rigorous assessments correlate extremely well with the more standardised assessments, so this is unlikely to be a major factor. In addition, there is very strong evidence that disorders like autism, ADHD, epilepsy, intellectual disability, tic disorders and others all share common etiology – having a broader diagnosis is therefore probably more appropriate.

In any case, the numbers they came up with for concordance rates were pretty similar across these studies. So, why did they end up with a different conclusion? That’s not a rhetorical question – I actually don’t know the answer and if anyone else does I would love to hear it. Given the data, I don’t know how they conclude that they provide evidence for shared environmental effects.

The methodology involves some statistical modeling that tries to tease out the sources of variance. However, this modeling is based completely on a multifactorial threshold model for the disorder – the idea that autism arises when the collective burden of individually minor genetic or environmental insults passes some putative threshold. Sounds plausible, but there is in fact no evidence – at all – that this model applies to autism. In fact, it seems most likely that autism really is an umbrella term for a collection of distinct genetic disorders caused by mutations in separate genes, but which happen to cause common phenotypes (or symptoms).

If that is the case, then what the twin concordance rates actually measure is the penetrance of such mutations – if one inherits mutation X, how often does that actually lead to autism? For monozygotic twins, let us assume that the affected proband (the first twin diagnosed) has such a mutation. Because they are genetically identical, the other one must too. The chance that the other twin will develop autism thus depends on the penetrance of the mutation – some mutations are more highly penetrant than others, giving a much higher probability of developing a specific phenotype. If we average across all MZ twin pairs we therefore get an average penetrance across all such putative mutations. Now, if such mutations are dominant, as many of the known ones are, then the chance that a dizygotic twin will inherit it is 50%, while the penetrance should remain the same. So, this model would predict that the rate of co-occurrence in DZ twins should be about half that of MZ twins, exactly as observed. (No stats required).
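
As a quick sanity check of that prediction, the snippet below computes the DZ/MZ ratios from the concordance rates quoted earlier. Most come out in the region of one half, as the dominant rare-mutation argument predicts, with the female ASD figure the main outlier.

```python
# DZ/MZ concordance ratios from the figures quoted above; a dominant
# rare-mutation model predicts a ratio of roughly 0.5.
rates = {
    "strict autism, males":   (0.58, 0.21),
    "strict autism, females": (0.60, 0.27),
    "ASD, males":             (0.77, 0.31),
    "ASD, females":           (0.50, 0.36),
}
for group, (mz, dz) in rates.items():
    print(f"{group}: DZ/MZ = {dz / mz:.2f}")
```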

The conclusions from this study that the heritability is only modest and that a larger fraction of variance (55%!) is caused by shared environment thus seem extremely shaky. This is reinforced by the fact that the confidence intervals for these estimates are extremely wide (for the effect of shared environment the 95% confidence interval ranges from 9% to 81%). Certainly not enough to overturn all the other data from other studies.

What about epidemiological studies that have shown statistical evidence of increased risk of autism associated with a variety of other factors, including maternal diabetes, antidepressant use, and season and place of birth? All of these factors have been linked with modest increases in the risk of autism. Don’t these prove there are important environmental factors? Well, first, they don’t prove causation – they provide statistical evidence for an association between two factors, which is not at all the same thing. Second, the increase in risk is usually on the order of two-fold. Twice the risk may sound like a lot, but in absolute terms it is only a 1% increase (from 1% to 2%), compared with some known mutations, which increase risk by 50-fold or more.

The main problem with these kinds of studies (and especially with how they are portrayed in the media) is that they are correlational, so you cannot establish a causal link directly from them. In some cases, two different correlated parameters (like red hair and freckles, for example) may actually be caused by an unmeasured third parameter. For example, in the recently published study, the use of antidepressants of the SSRI (selective serotonin reuptake inhibitor) class by mothers was associated with modestly increased risk of autism in the progeny. This association could be because SSRIs disrupt neural development in the fetus (perfectly plausible), but could alternatively be due to the known genetic link between risk of depression and risk of autism. Rates of depression are known to be higher in relatives of autistic people, so SSRI use could just be a proxy for that condition. The authors claim to have corrected for that by comparing rates of autism in the progeny of depressed mothers who were not prescribed SSRIs versus those who were, but one might imagine that the severity of depression would be higher among those prescribed an antidepressant. In addition, the authors are careful to note that their findings were based on a small number of exposed children and that “Further studies are needed to replicate and extend these findings”. As with many such findings, this association may or may not hold up with additional study.

As for season and place of birth, those findings are better replicated and, interestingly, also found for schizophrenia. There is a theory that these effects may relate to maternal vitamin D levels, which can also affect neural development. This also seems plausible enough. However, the problem in really having confidence in these findings and in knowing how to interpret them is that they are population averages with small effect sizes. Overall, it seems quite possible that the environment – especially the prenatal environment – can play a part in the etiology of autism. At the moment, splashy headlines notwithstanding, genetic factors look much more important and genetic studies much more likely to give us the crucial entry points to the underlying biology.

Mirrored from Wiring the Brain.

Hallmayer J, Cleveland S, Torres A, Phillips J, Cohen B, Torigoe T, Miller J, Fedele A, Collins J, Smith K, Lotspeich L, Croen LA, Ozonoff S, Lajonchere C, Grether JK, & Risch N (2011). Genetic Heritability and Shared Environmental Factors Among Twin Pairs With Autism. Archives of general psychiatry PMID: 21727249

Lichtenstein P, Carlström E, Råstam M, Gillberg C, & Anckarsäter H (2010). The genetics of autism spectrum disorders and related neuropsychiatric disorders in childhood. The American journal of psychiatry, 167 (11), 1357-63 PMID: 20686188

Rosenberg, R., Law, J., Yenokyan, G., McGready, J., Kaufmann, W., & Law, P. (2009). Characteristics and Concordance of Autism Spectrum Disorders Among 277 Twin Pairs Archives of Pediatrics and Adolescent Medicine, 163 (10), 907-914 DOI: 10.1001/archpediatrics.2009.98

Croen LA, Grether JK, Yoshida CK, Odouli R, & Hendrick V (2011). Antidepressant Use During Pregnancy and Childhood Autism Spectrum Disorders. Archives of general psychiatry PMID: 21727247

On discovering you’re an android


Deckard: She’s a replicant, isn’t she?
Tyrell: I’m impressed. How many questions does it usually take to spot them?
Deckard: I don’t get it, Tyrell.
Tyrell: How many questions?
Deckard: Twenty, thirty, cross-referenced.
Tyrell: It took more than a hundred for Rachael, didn’t it?
Deckard: [realizing Rachael believes she's human] She doesn’t know.
Tyrell: She’s beginning to suspect, I think.
Deckard: Suspect? How can it not know what it is?

A very discomfiting realisation, discovering you are an android. That all those thoughts and ideas and feelings you seem to be having are just electrical impulses zapping through your circuits. That you are merely a collection of physical parts, whirring away. What if some of them break and you begin to malfunction? What if they wear down with use and someday simply fail? The replicants in Blade Runner rail against their planned obsolescence, believing in the existence of their own selves, even with the knowledge that those selves are merely the products of machinery.

The idea that the self, or the conscious mind, emerges from the workings of the physical structures of the brain – with no need to invoke any supernatural spirit, essence or soul – is so fundamental to modern neuroscience that it almost goes unmentioned. It is the tacitly assumed starting point for discussions between neuroscientists, justified by the fact that all the data in neuroscience are consistent with it being true. Yet it is not an idea that the vast majority of the population is at all comfortable with or remotely convinced by. Its implications are profound and deeply unsettling, prompting us to question every aspect of our most deeply held beliefs and intuitions.

This idea has crept along with little fanfare – it did not emerge all at once like the theory of evolution by natural selection. There was no sudden revolution, no body of evidence proffered in a single moment that overturned the prevailing dogma. While the Creator was toppled with a single, momentous push, the Soul has been slowly chipped away at over a hundred years or more, with most people blissfully unaware of the ongoing assault. But its demolition has been no less complete.

If you are among those who are skeptical of this claim, or who feel, as many do, that there must be something more than just the workings of the brain to explain the complexities of the human mind and the qualities of subjective experience (especially your own), then first ask yourself: what kind of evidence would it take to convince you that the function of the brain is sufficient to explain the emergence of the mind?

Imagine you came across a robot that performed all the functions a human can perform – that reported a subjective experience apparently as rich as yours. If you were able to observe that the activity of certain circuits was associated with the robot’s report of subjective experience, if you could drive that experience by activating particular circuits, if you could alter it by modifying the structure or function of different circuits, would there be any doubt that the experience arose from the activity of the circuits? Would there be anything left to explain?

The counter-argument to this thought experiment is that it would never be possible to create a robot that has human-like subjective experience (because robots don’t have souls). Well, all those kinds of experiments have, of course, been done on human beings, tens of thousands of times. Functional magnetic resonance imaging methods let us correlate the activity of particular brain circuits with particular behaviours, perceptions or reports of inward states. Direct activation of different brain areas with electrodes is sufficient to drive diverse subjective states. Lesion studies and pharmacological manipulations have allowed us to map which brain areas and circuits, neurotransmitters and neuromodulators are required for which functions, dissociating different aspects of the mind. Finally, differences in the structure or function of brain circuits account for differences in the spectrum of traits that make each of us who we are as individuals: personality, intelligence, cognitive style, perception, sexual orientation, handedness, empathy, sanity – effectively everything people view as defining characteristics of a person. (Even firm believers in a soul would be reluctant recipients of a brain transplant, knowing full well that their “self” would not survive the procedure).

The findings from all these kinds of approaches lead to the same broad conclusion: the mind arises from the activity of the brain – and nothing else. What neuroscience has done is to correlate the activity of certain circuits with certain mental states, show that this activity is required for those states to arise, show that differences in these circuits affect the quality of those states, and finally demonstrate that driving these circuits from the outside is sufficient to induce them. That seems like a fairly complete scientific explanation of the phenomenon of mental states. If we had those data for our thought-experiment robot, we would be pretty satisfied that we understood how it worked (and could make useful predictions about how it would behave and what mental states it would report, given enough information about the activity of its circuits).

However, many philosophers (and probably a majority of people) would argue that there is something left to explain. After all, I don’t feel like an android – one made of biological rather than electronic materials, but a machine made solely of physical parts nonetheless. I feel like a person, with a rich mental life. How can the qualities of my subjective experience be produced by the activity of various brain circuits?

Many would claim, in fact, that subjective experience is essentially “ineffable” – it cannot be described in physical terms and cannot thus be said to be physical. It must therefore be non-physical, immaterial or even supernatural. However, the fact that we cannot conceive of how a mental state could arise from a brain state is a statement about our current knowledge and our powers of imagination and comprehension, not about the nature of the brain-mind relationship. As an argument, what we currently can or cannot conceive of has no bearing on the question. The strong intuition that the mind is more than just the activity of the brain is reinforced by an unfortunate linguistic accident – that the word “mind” is grammatically a noun, when really it should be a verb. At least, it does not describe an object or a substance, but a process or a state. It is not made of stuff but of the dynamic relations between bits of stuff.

When people argue that activity of some brain circuit is not identical to a subjective experience or sufficient to explain it, they are missing a crucial point – it is that activity in the context of the activity of the entire rest of the nervous system that generates the quality of the subjective experience at any moment. And those who dismiss this whole approach as scientific reductionism ad absurdum, claiming that the richness of human experience could not be explained merely by the activity of the brain, should consider that there is nothing “mere” about it – with close to a hundred billion neurons making on the order of a hundred trillion connections, the complexity of the human brain is almost incomprehensible to the human mind. (“If the brain were so simple that we could understand it, then we would be so simple that we couldn’t”).

To be more properly scientific, we should ask: “what evidence would refute the hypothesis that the mind arises solely from the activity of the brain”? Perhaps there is positive evidence available that is inconsistent with this view (as opposed to arguments based merely on our current inability to explain everything about the mind-brain relationship). It is not that easy to imagine what form such positive evidence would take, however – it would require showing that some form of subjective experience either does not require the brain or requires more than just the brain.

With respect to whether subjective experience requires the brain, the idea that the mind is associated with an immaterial essence, spirit or soul has an extension, namely that this soul may somehow outlive the body and be said to be immortal. If there were strong evidence of some form of life after death then this would certainly argue strongly against the sufficiency of neuroscientific materialism. Rather depressingly, no such evidence exists. It would be lovely to think we could live on after our body dies and be reunited with loved ones who have died before us. Unfortunately, wishful thinking does not constitute evidence.

Of course, there is no scientific evidence that there is not life after death, but should we expect neuroscience to have to refute this alternative hypothesis? Actually, the idea that there is something non-physical at our essence is non-refutable – no matter how much evidence we get from neuroscience, it does not prove this hypothesis is wrong. What neuroscience does say is that it is not necessary and has no explanatory power – there is no need of that hypothesis.

Complex interactions among epilepsy genes


A debate has been raging over the last few years over the nature of the genetic architecture of so-called “complex” disorders. These are disorders – such as schizophrenia, epilepsy, type II diabetes and many others – which are clearly heritable across the population, but which do not show simple patterns of inheritance. A new study looking at the profile of mutations in hundreds of genes in patients with epilepsy dramatically illustrates this complexity. The possible implications are far-reaching, especially for our ability to predict risk based on an individual’s genetic profile, but do these findings apply to all complex disorders?

Complex disorders are so named because, while it is clear that they are highly heritable (risk to an individual increases the more closely related they are to someone who has the disorder), their mode of inheritance is far more difficult to discern. Unlike classical Mendelian disorders (such as cystic fibrosis or Huntington’s disease), these disorders do not show simple patterns of segregation within families that would peg them as recessive or dominant, nor can they be linked to mutations in a single gene. This has led people to propose two very different explanations for how they are inherited.

One theory is that such disorders arise due to unfortunate combinations of large numbers of genetic variants that are common in the population. Individually, such variants would have little effect on the phenotype, but collectively, if they surpass some threshold of burden, they could tip the balance into a pathological state. This has been called the common disease/common variant (CD/CV) model.

The alternative model is that these “disorders” are not really single disorders at all – rather they are umbrella terms for collections of a large number of distinct genetic disorders, which happen to result in a similar set of symptoms. Within any individual or family, the disorder may indeed be caused by a particular mutation. Because many of the disorders in question are very severe, with high mortality and reduced numbers of offspring, these mutations will be rapidly selected against in the population. They will therefore remain very rare and many cases of the disorder may arise from new, or de novo, mutations. This has therefore been called the multiple rare variants (MRV) model.

Lately, a number of mixed models have been proposed by various researchers, including myself. Even classical Mendelian disorders rarely show strictly Mendelian inheritance – instead the effects of the major mutations are invariably affected by modifiers in the genetic background. (These are variants with little effect by themselves but which may have a strong effect in combination with some other mutation). If this sounds like a return to the CD/CV model, there are a couple of important distinctions to keep in mind. One is the nature of the mutations involved – the mixed model would still invoke some rare mutation that has a large effect on protein function. It may not always cause the disorder by itself (i.e., not everyone who carries it will be affected), but could still be called causative in the sense that if the affected individual did not carry it one would expect they would not suffer from the disorder. The other is the number of mutations or variants involved – under the CD/CV model this could number in the thousands (a polygenic architecture), while under the mixed model one could expect a handful to be meaningfully involved (an oligogenic architecture – see diagram from review in Current Opinion in Neurobiology).

The new study, from the lab of Jeff Noebels, aimed to test these models in the context of epilepsy. Epilepsy is caused by an imbalance in excitation and inhibition within brain circuits. This can arise due to a large number of different factors, including alterations in the structural organisation of the brain, which may be visible on magnetic resonance imaging. Many neurodevelopmental disorders are therefore associated with epilepsy as a symptom (usually one of many). But it can also arise due to more subtle changes, not in the gross structure of the brain or the physical wiring of different circuits, but in the way the electrical activity of individual neurons is controlled.

The electrical properties of any neuron – how excitable it is, how long it remains active, whether it fires a burst of action potentials or single ones, what frequency it fires at and many other important parameters – are determined in large part by the particular ion channel proteins it expresses. These proteins form a pore crossing the membrane of the cell, through which electrically charged ions can pass. Different channels are selective for sodium, potassium or calcium ions and can be activated by different types of stimuli – binding a particular neurotransmitter or a change in the cell’s voltage for example. Many channels are formed from multiple subunits, each of which may be encoded by a different gene. There are hundreds of these genes in several large families, so the resultant complexity is enormous.

Many familial cases of epilepsy have been found to be caused by mutations in ion channel genes. However, most epilepsy patients outside these families do not carry these particular mutations. Therefore, despite these findings and despite the demonstrated high heritability, the particular genetic cause of the vast majority of cases of epilepsy has remained unknown. Large genome-wide association studies have looked for common variants that are associated with risk of epilepsy but have turned up nothing of note. The interpretation has been that common variants do not play a major role in the etiology of idiopathic epilepsy (epilepsy without a known cause).

The rare variants model suggests that many of these cases are caused by single mutations in any of the very large number of ion channel genes. A straightforward experiment to test that would be to sequence all these candidate genes in a large number of epilepsy patients. The hope is that it would be possible to shake out the “low hanging fruit” – obviously pathogenic mutations in some proportion of cases. The difficulty lies in recognising such a mutation as pathogenic when one finds it. This generally relies on some statistical evidence – any individual mutation, or such mutations in general, should be more frequent in epilepsy patients than in unaffected controls. The experiment must therefore involve as large a sample as possible and a control comparison group as well as patients.

Klassen and colleagues sequenced 237 ion channel genes in 152 patients with idiopathic epilepsy and 139 healthy controls. What they found was surprising in several ways. They did find lots of mutations in these genes, but at almost equal frequencies in patients and controls. Even the mutations predicted to have the most severe effects on protein function were not significantly enriched in patients. Indeed, mutations in genes already known to be linked to epilepsy were found in patients and controls alike (while 96% of patients carried such a mutation, so did 67% of controls). Either these specific mutations are not pathogenic or their effects can be strongly modified by the genetic background.

More interesting results emerged from looking at the occurrence of multiple mutations in these genes in individuals. 78% of patients vs 30% of controls had two or more mutations in known familial epilepsy genes. A similar trend was observed when looking at specific ion channel gene families, such as GABA receptors or sodium channels.
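To get a feel for the kind of case-control comparison involved, the carrier counts can be reconstructed approximately from the percentages and sample sizes quoted above and compared with Fisher’s exact test. This is only an illustration – the reconstructed counts are approximate and the paper’s own analysis is considerably more sophisticated.

```python
from scipy.stats import fisher_exact

n_patients, n_controls = 152, 139

# >=1 mutation in a known familial epilepsy gene: ~96% of patients vs ~67% of controls
patients_one, controls_one = round(0.96 * n_patients), round(0.67 * n_controls)

# >=2 such mutations: ~78% of patients vs ~30% of controls
patients_two, controls_two = round(0.78 * n_patients), round(0.30 * n_controls)

for label, cases, ctrls in [(">=1 mutation", patients_one, controls_one),
                            (">=2 mutations", patients_two, controls_two)]:
    table = [[cases, n_patients - cases], [ctrls, n_controls - ctrls]]
    odds, p = fisher_exact(table)
    print(f"{label}: odds ratio {odds:.1f}, p = {p:.2g}")
```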

These data would seem to fit with the idea that an increasing mutational load pushes the system over a threshold into a pathological state. The reality seems more complicated, however, and far more nuanced. Though the average load was lower in controls, many of them carried a very high load and yet were quite healthy. It seems that the specific pattern of mutations is far more important than the overall number. This fits very well with the known biology of ion channels and previous work on genetic interactions between mutations in these genes.

Though one might expect a simple relationship between number of mutations and severity of phenotype, that is unlikely to be the case for these genes. It is well known that the effects of a mutation in one ion channel gene can be suppressed by mutation in another gene – restoring the electrical balance in the cell, at least to a degree sufficient for performance under normal conditions. The system is so complex, with so many individual components, that these interactions are extremely difficult to predict. This is complicated further by the fact that there are active processes within the system that act to normalise its function. It has been very well documented, especially by Eve Marder and colleagues, that changes to one ion channel in a neuron can be compensated for by homeostatic mechanisms within the cell that aim to readjust the electrical set-points for optimal physiological function. In fact, these mechanisms operate not just within single cells but across whole circuits.
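A crude way to picture why the pattern matters more than the count: if each variant nudges some index of excitability up or down, two individuals carrying the same number of variants can end up in very different places depending on whether the effects add up or partially cancel. The sketch below is purely illustrative – the variant labels and effect sizes are invented, and real channel interactions are far less predictable than simple addition.

```python
# Purely illustrative "pattern beats load" toy: signed effects on an arbitrary
# excitability index. Variant labels and effect sizes are invented.

variant_effects = {
    "sodium_channel_gain":    +0.8,  # pushes towards hyperexcitability
    "calcium_channel_gain":   +0.6,
    "gaba_receptor_loss":     +0.5,  # less inhibition, also pro-excitable
    "potassium_channel_gain": -0.7,  # extra K+ current damps excitability
}

profiles = {
    "three variants, effects adding":     ["sodium_channel_gain", "calcium_channel_gain", "gaba_receptor_loss"],
    "three variants, effects cancelling": ["sodium_channel_gain", "gaba_receptor_loss", "potassium_channel_gain"],
}

for label, variants in profiles.items():
    net = sum(variant_effects[v] for v in variants)
    print(f"{label}: load = {len(variants)}, net shift = {net:+.1f}")
```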

The upshot of the study is that, though some of the mutations they discovered are indeed likely to be the pathogenic culprits, it is very difficult to discern which ones they are. It is very clear that there is at least an oligogenic architecture for so-called “channelopathies” – the phenotype is determined by several mutations in each individual. (Note that this is not evidence for a highly polygenic architecture involving hundreds or thousands of genetic variants with tiny individual effects). The important insight is that what matters is not the overall number of mutations – the mutational load – but the specific pattern of mutations carried by each individual. Unfortunately, given how complicated the system is, this means it is currently not possible to predict an individual’s risk, even with this wealth of data. This will likely require a lot more biological information on the interactions between these mutations from experimental approaches and computational modelling.

What are the implications for other complex disorders? Should we expect a similarly complicated picture for diseases like schizophrenia or autism? Perhaps, though I would argue against over-extrapolating these findings. For the reasons described above, mutations in ion channel genes will show especially complex genetic interactions – it is, for example, even possible for two mutations that are individually pathogenic to suppress each other’s effects in combination. This is far less likely to occur for classes of mutations affecting processes such as neurodevelopment, many of which have been implicated in psychiatric disorders. Though by no means unheard of, it is far less common for the effects of one neurodevelopmental mutation to be suppressed by another – it generally just makes things worse. So, while modifying effects of genetic background will no doubt be important for such mutations, there is some hope that the interactions will be more straightforward to elucidate (mostly enhancing, far fewer suppressing). Others may see it differently of course (and I would be pleased to hear from you if you do); similar sequencing efforts currently underway for these disorders may soon tell whether that prediction is correct.

Klassen T, Davis C, Goldman A, Burgess D, Chen T, Wheeler D, McPherson J, Bourquin T, Lewis L, Villasana D, Morgan M, Muzny D, Gibbs R, & Noebels J (2011). Exome sequencing of ion channel genes reveals complex profiles confounding personal risk assessment in epilepsy. Cell, 145 (7), 1036-48 PMID: 21703448

Kasperaviciute, D., Catarino, C., Heinzen, E., Depondt, C., Cavalleri, G., Caboclo, L., Tate, S., Jamnadas-Khoda, J., Chinthapalli, K., Clayton, L., Shianna, K., Radtke, R., Mikati, M., Gallentine, W., Husain, A., Alhusaini, S., Leppert, D., Middleton, L., Gibson, R., Johnson, M., Matthews, P., Hosford, D., Heuser, K., Amos, L., Ortega, M., Zumsteg, D., Wieser, H., Steinhoff, B., Kramer, G., Hansen, J., Dorn, T., Kantanen, A., Gjerstad, L., Peuralinna, T., Hernandez, D., Eriksson, K., Kalviainen, R., Doherty, C., Wood, N., Pandolfo, M., Duncan, J., Sander, J., Delanty, N., Goldstein, D., & Sisodiya, S. (2010). Common genetic variation and susceptibility to partial epilepsies: a genome-wide association study Brain, 133 (7), 2136-2147 DOI: 10.1093/brain/awq130

Mitchell KJ (2011). The genetics of neurodevelopmental disease. Current opinion in neurobiology, 21 (1), 197-203 PMID: 20832285

Mirrored from http://wiringthebrain.blogspot.com

Synaesthesia and savantism

“We only use 10% of our brain”. I don’t know where that idea originated but it certainly took off as a popular meme – taxi drivers seem particularly taken with it. It’s rubbish of course – you use more than that just to see. But it captures an idea that we humans have untapped intellectual potential – that in each of us individually, or at least in humans in general, lies the potential for genius.

Part of what has fed into that idea is the existence of so-called “savants” – people who have some isolated area of special intellectual ability far beyond most other individuals. Common examples of savant abilities include prodigious mental calculations, calendar calculations and remarkable feats of memory. These can arise due to brain injuries, or be apparently congenital. In congenital cases, savant abilities are often encountered against a background of the general intellectual, social or communicative symptoms of autism. (The portrayal by Dustin Hoffman in Rain Man is a good example, based on the late, well known savant Kim Peek).

A new hypothesis proposes that savantism arises due to a combination of autism and another condition, synaesthesia. Synaesthesia is commonly thought of as a cross-sensory phenomenon, where, for example, different sounds will induce the experience of particular colours, or tastes will induce the tactile experience of a shape. But in most cases the stimuli that induce synaesthesia are not sensory, but conceptual categories of learned objects, such as letters, numbers, days of the week, months of the year. The most common types involve coloured letters or numbers and what are called mental “number forms”.

These go beyond the typical mental number line that most of us can visualise from early textbooks. They are detailed, stable and idiosyncratic forms in space around the person, where each number occupies a specific position. They may follow complicated trajectories through space, even wrapping around the individual’s body in some cases. These forms can be related to different reference points (body, head or gaze-oriented) and can sometimes be mentally manipulated by synaesthetes to examine them more closely at specific positions.

The suggestion in relation to savantism is that such forms enable arithmetical calculations to be carried out in some kind of spatial, intuitive way that is distinct from the normal operations of formal arithmetic – but only when the brain is wired in such a way as to take advantage of these special representations of numbers, as apparently can arise due to autism.

It has been proposed that the intense and narrowly focused interests typical of autism can lead to prolonged practice of these skills, which thus emerge and improve over time. While certainly likely to be involved in the development of these skills, on its own this explanation seems insufficient. It seems more likely that these special abilities arise from more fundamental differences in the way the brains of autistic people process information, with a greater degree of processing of local detail, paralleled by greater local connectivity in neural circuits and reductions in long-range integration.

Local processing may normally be actively inhibited. This idea has been referred to as the tyranny of the frontal lobes (especially of the left hemisphere), which impart top-down expectations with such authority that they override lower areas, conscripting them into service for the greater good. The potential of the local elements to process detailed information is thus superseded in order to achieve optimal global performance. The idea that local processing is actively suppressed is supported by the fact that savant abilities can sometimes emerge after frontal lobe injuries or in cases of frontotemporal dementia. Increased skills in numerical estimation can also, apparently, be induced in healthy people by using transcranial magnetic stimulation to temporarily inactivate part of the left hemisphere.

This kind of focus on local details, combined with an exceptional memory, may explain many types of savant skills, including musical and artistic ones. As many as 10% of autistics show some savant ability. These “islands of genius” (including things like perfect pitch, for example) are typically remarkable only on the background of general impairment – they would be less remarkable in the general population. Really prodigious savants are much more rare – these are people who can do things outside the range of normal abilities, such as phenomenal mathematical calculations. In these cases, the increased local processing typical of autism may not be, by itself, sufficient to explain the supranormal ability.

The idea is that such prodigious calculations may also rely on the concrete visual representations of numbers found in some types of synaesthesia. This theory was originally proposed by Simon Baron-Cohen and colleagues and arose from case studies of individual savants, including Daniel Tammet, an extraordinary man who has both Asperger’s syndrome and synaesthesia.

I had the pleasure of speaking with Daniel recently about his particular talents on the FutureProof radio programme for Dublin’s Newstalk Radio. (The podcast, from Nov 27th, 2010, can be accessed, with some perseverance, here). Daniel is unique in many ways. He has the prodigious mental talents of many savants, for arithmetic calculations and memory, but also has the insight and communicative skills to describe what is going on in his head. It is these descriptions that have fueled the idea that the mental calculations he performs rely on his synaesthetic number forms.

Daniel experiences numbers very differently from most people. He sees numbers in his mind’s eye as occupying specific positions in space. They also have characteristic colours, textures, movement, sounds and, importantly, shapes. Sequences of numbers form “landscapes in his mind”. This is vividly portrayed in the excellent BBC documentary “The Boy With the Incredible Brain” and described by Daniel in his two books, “Born on a Blue Day” and “Embracing the Wide Sky”.

His synaesthetic experiences of numbers are an intrinsic part of his arithmetical abilities. (I say arithmetical, as opposed to mathematical, because his abilities seem to be limited to prodigious mental calculations, as opposed to a talent for advanced calculus or other areas of mathematics). Daniel describes doing these calculations by some kind of mental spatial manipulation of the shapes of numbers and their positions in space. When he is performing these calculations he often seems to be tracing shapes with his fingers. He is, however, hard pressed to define this process exactly – it seems more like his brain does the calculation and he reads off the answer, apparently deducing the value based at least partly on the shape of the resultant number.

Daniel is also the European record holder for remembering the digits of the number pi – to over 20,000 decimal places. This feat also takes advantage of the way that he visualises numbers – he describes moving along a landscape of the digits of pi, which he sees in his mind’s eye and which enables him to recall each digit in sequence. The possible generality of this single case study is bolstered by reports of other savants, who similarly utilise visuospatial forms in their calculations and who report that they simply “see” the correct answer (see review by Murray).

Additional evidence to support the idea comes from studies testing whether the concrete and multimodal representations of numbers or units of time are associated with enhanced cognitive abilities in synaesthetes who are not autistic. Several recent studies suggest this is indeed the case.

Many synaesthetes say that having particular colours or spatial positions for letters and numbers helps them remember names, phone numbers, dates, etc. Ward and colleagues have tested whether these anecdotal reports would translate into better performance on memory tasks and found that they do. Synaesthetes did show better than average memory, but importantly, only for those items which were part of their synaesthetic experience. Their general memory was no better than non-synaesthete controls. Similarly, Simner and colleagues have found that synaesthetes with spatial forms for time units perform better on visuospatial tasks such as mental rotation of 3D objects.

Synaesthesia and autism are believed to occur independently and, as each only occurs in a small percentage of people, the joint occurrence is very rare. Of course, it remains possible that, even though most people with synaesthesia do not have autism and vice versa, their co-occurrence in some cases may reflect a single cause. Further research will be required to determine definitively the possible relationship between these conditions. For now, the research described above, especially the first-person accounts of Daniel Tammet and others, gives a unique insight into the rich variety of human experience, including fundamental differences in perception and cognitive style.

Murray, A. (2010). Can the existence of highly accessible concrete representations explain savant skills? Some insights from synaesthesia Medical Hypotheses, 74 (6), 1006-1012 DOI: 10.1016/j.mehy.2010.01.014

Bor, D., Billington, J., & Baron-Cohen, S. (2008). Savant Memory for Digits in a Case of Synaesthesia and Asperger Syndrome is Related to Hyperactivity in the Lateral Prefrontal Cortex Neurocase, 13 (5), 311-319 DOI: 10.1080/13554790701844945

Simner, J., Mayo, N., & Spiller, M. (2009). A foundation for savantism? Visuo-spatial synaesthetes present with cognitive benefits Cortex, 45 (10), 1246-1260 DOI: 10.1016/j.cortex.2009.07.007

Yaro, C., & Ward, J. (2007). Searching for Shereshevskii: What is superior about the memory of synaesthetes? The Quarterly Journal of Experimental Psychology, 60 (5), 681-695 DOI: 10.1080/17470210600785208

Where do morals come from?

Review of “Braintrust. What Neuroscience Tells Us about Morality”, by Patricia S. Churchland

The question of “where morals come from” has exercised philosophers, theologians and many others for millennia. It has lately, like many other questions previously addressed only through armchair rumination, become addressable empirically, through the combined approaches of modern neuroscience, genetics, psychology, anthropology and many other disciplines. From these approaches a naturalistic framework is emerging to explain the biological origins of moral behaviour. From this perspective, morality is neither objective nor transcendent – it is the pragmatic and culture-dependent expression of a set of neural systems that have evolved to allow our navigation of complex human social systems.

 

Read the rest of this entry »

BBC Series

The BBC have just finished a short (3-part) series of documentaries by Adam Curtis, under the general heading ‘All Watched Over by Machines of Loving Grace’. It’s impossible to describe them briefly, so I won’t try; suffice to say I found them fascinating but often exasperating with their wild leaps of logic. For GNXP readers the most interesting will probably be the last, which has a lot of material about W. D. Hamilton, George Price, and Dian Fossey, as well as extraordinary archive footage from Central Africa. (I bet you never heard a BBC reporter casually referring to ‘jungle bunnies’ before.)

They are all available here for the next week, at least. Unfortunately I don’t know if this will be accessible outside the UK – some things are and some aren’t, usually for copyright reasons.

[Added: if you can't view it on the BBC iPlayer, the final part is currently available on YouTube - search YouTube for recent postings on 'Adam Curtis'.]

Natural selection and the collapse of economic growth

**This is a cross-post from my blog Evolving Economics

In my last post, I discussed Oded Galor and Omer Moav’s paper Natural Selection and the Origin of Economic Growth. As I noted then, my PhD supervisors, Juerg Weber and Boris Baer, and I have written a discussion paper that describes a simulation of the model.

In the discussion paper we consider the entry into the population of people who have a low preference for child quality – i.e. they weight child quantity more highly. Entry could be through migration or mutation. We show that if people with a low enough preference for quality enter the population, their higher fitness in the modern growth state can drive the economy back into Malthusian conditions.

Read the rest of this entry »

Natural selection and economic growth

**This is a cross-post from my blog Evolving Economics

As I have focussed my PhD research on the link between evolution and long-term economic growth, for months I have meant to blog on the core paper in this area, Natural Selection and the Origin of Economic Growth by Oded Galor and Omer Moav. I have held off writing this post pending finalisation of some of my own related work, which I have now done.

This paper is somewhat of an outlier as I’m not aware of any other paper that models the Industrial Revolution as a result of natural selection (apart from a soon to be published paper by Galor and Michalopoulos). There is another paper by Zak and Park that examines population genetics and economic growth (a topic for another blog post) but they do not directly tackle the Industrial Revolution. In A Farewell to Alms, Greg Clark notes that Galor and Moav’s paper reignited his interest in this topic.

Read the rest of this entry »

Somatic mutations make twins’ brains less similar

There is a paradox at the heart of behavioural and psychiatric genetics. On the one hand, it is very clear that practically any psychological trait one cares to study is partly heritable – i.e., the differences in the trait between people are partly caused by differences in their genes. Similarly, psychiatric disorders are also highly heritable and, by now, mutations in hundreds of different genes have been identified that cause them.

However, these studies also highlight the limits of genetic determinism, which is especially evident in comparisons of monozygotic (identical) twins, who share all their genetic inheritance in common. Though they are obviously much more like each other in psychological traits than people who are not related to each other, they are clearly NOT identical to each other for these traits. For example, if one twin has a diagnosis of schizophrenia, the chance that the other one will also suffer from the disorder is about 50% – massively higher than the population prevalence of the disorder (around 1%), but also clearly much less than 100%.

Read the rest of this entry »

The miswired brain

Recent evidence indicates that psychiatric disorders can arise from differences, literally, in how the brain is wired during development. Psychiatric genetic approaches are finding new mutations associated with mental illness at an amazing rate, thanks to new genomic array and sequencing technologies. These mutations include so-called copy number variants (deletions or duplications of sections of a chromosome) or point mutations (a change in the code at one position of the DNA sequence). At the recent Wiring the Brain conference, we heard from Christopher Walsh, Guy Rouleau, Michael Gill and others of the identification of a number of new genes associated with neurological disorders, epilepsy, autism and schizophrenia.

The emerging picture is that each of these disorders can be caused by mutations in any one of a large number of genes. Strikingly, many of these genes play important roles in neural development, with mutations affecting patterns of cell migration, the guidance of growing nerve fibres and their connectivity to other cells. Even more remarkable has been the observation that most such mutations predispose to not just one specific illness (such as schizophrenia) but to mental illness in general, with a strong overlap in the genetics of schizophrenia, autism, bipolar disorder, epilepsy, mental retardation, attention-deficit hyperactivity disorder and other diagnostic categories. These different categories may thus represent arguably distinct endpoints arising from common origins in neurodevelopmental insults.

Read the rest of this entry »

Caplan’s Selfish Reasons to Have More Kids

Bryan Caplan has a simple recommendation. Have more kids. If you have one, have another. If you have two, consider three or four. As Caplan spells out in his book, Selfish Reasons to Have More Kids, children have higher private benefits than most people think. Research shows that parents can take it easy, as there is not much they can change about their children. He also argues that there are social benefits to a higher population, with more people leading to more ideas, which are the foundation of modern economic growth.

Despite being someone who is about to face the number of children question, I am not sure that I am the target audience for Caplan’s book. I don’t mean that Caplan wouldn’t recommend to me that I have more children. Rather, as someone who has thought a lot about evolution and economics and having read many of the giants on whose shoulders Caplan stands (particularly Judith Rich Harris and Julian Simon), I didn’t learn a lot from the book. As Caplan ran through the examples of twin studies showing all the different facets of a child’s personality or life outcomes that a parent has no influence over, I found myself wanting more meat and analysis. I felt similarly about his arguments for a larger population.

Having said that, and recognising that I am not the target audience, most readers would probably learn a lot. Caplan provides a fun, easy-to-read book that gives a great, swift overview of his case. This is the book I’ll be giving to parents, grandparents and friends who have heard me go on about twin studies and genetics. I particularly like that Caplan gives some practical grounding to the swathes of findings about trait heritability.

I felt that the largest shortcoming of the book was that it does not address the third factor affecting outcomes for the child – non-shared environment. While heritability explains some of the variation in a child’s traits and outcomes, and nurture generally explains close to nothing, Caplan does not explore the research into non-shared environment. Instead, he puts the variation down to free will:

So far, researchers have failed to explain why identical twins – not to mention ordinary siblings – are so different. Discrediting popular explanations is easy, but finding credible alternatives is not. Personally, I doubt that scientists will ever account for my sons’ differences, because I think their primary source is free will. Despite genes, despite family, despite everything, human beings always have choices – and when we can make different choices, we often do.

Caplan states that several of his friends call his belief in free will his “most absurd belief”. While I don’t know all of Caplan’s beliefs, for the moment I will agree with his friends. In Judith Rich Harris’s The Nurture Assumption, she explored what this non-shared environment might be. In her case, she argued for the effect of peers. What bothered me most with Caplan’s take on free will was not that he did not agree with Harris’s suggestion, but rather, his “it’s all too hard” approach. Unlike Caplan, I expect that over the next few years we will add even further to the explanations for how non-shared environment influences children.

When Caplan came to addressing potential reasons why family size has decreased over the last 60 years, I wanted to hear his arguments in more depth. Take, for example, his response to Gary Becker’s argument that as women now earn more, they have to give up more income to have kids:

This explanation sounds good, but it’s not as smart as it seems. Women lose more income when they take time off, but they also have a lot more income to lose. They could have worked less, earned more, and had more kids. Since men’s wages rose, too, staying home with the kids is actually more affordable for married moms than ever. If that’s too retro, women could have responded to rising wages by working more, having more kids, and using their extra riches to hire extra help.

It sounds neat, but Caplan assumes that the income effect, which would tend to increase the number of children, dominates the substitution effect, which would tend to decrease the number. It is perfectly plausible for the substitution effect to dominate and women to decide to have fewer children, but Caplan does not address this. He might be right, but as there is no depth to his discussion, it is hard to judge the strength of his argument.

Caplan does point out that in the United States, fertility bottomed out in the 1970s. This occurred despite further increases in income, and Caplan uses this as evidence against any income-based hypothesis. But the people having children in the 1970s are different to the people having children now. First, the women who chose to have no children in the 1970s – and who may have responded most strongly to the income effect – did not contribute to the gene pool, so any heritable predisposition disappeared with them; it is the children of larger families that are having children today. Second, the net fertility rate in the United States is substantially affected by recent immigrants.

Caplan’s preferred view on the decline in fertility is that we have gained a small amount of foresight, allowing us to see the negative effects of early childhood, but not gained enough foresight to note the benefits of children when they are older. There might be some truth to this, but I expect that the other factors that Caplan dismisses are also relevant.

One point where I disagree with Caplan is around his statement that men and women see eye to eye on the number of children they wish to have. Caplan considers that this puts to bed any arguments around women having increased bargaining power. While Caplan’s statistic is true in the most basic sense, the number of children that a man or woman wants is a function of a number of things. The main one of these is who the other parent will be. If a woman is paired with the man of her dreams she is likely to want more children than if she is married to a guy who showed promise but has gone nowhere. While Caplan notes that condoms have been widely available since the end of World War II, the pill gave women extra power to decide who exactly the parent is. There is some interesting scope for sexual conflict here.

When it comes to policy prescriptions arising from his position, Caplan explicitly opposes natalist policies to increase birth rates. Caplan states:

After natalists finish lamenting low birthrates, they usually get on a soapbox and demand that the government “do something about it.” There are two big reasons why I refuse to join their chorus. First, while I agree that more kids make the world a better place, I oppose social engineering – especially for such a personal decision. When people are deciding how many children to have, government ought to mind its own business.

Instead, Caplan suggests that grandparents replicate the natalist incentives privately. Given this, it is interesting that Caplan drifts into supporting natalist tax credits in his recent Cato Unbound essay (as I have commented on here). I prefer his arguments for the use of private incentives from his book to his more recent encouragement of government action.

*This is a cross-post from my blog Evolving Economics.

The Fertility J-Curve

Via the Demography Matters blog, the Russian birthrate seems to have recovered:

By 2009, the official TFR had risen to 1.537, 1.417 in urban areas and 1.900 in rural areas. Both urban and rural TFRs rose by about the same amount from 2000 to 2009, about 0.330. Vital statistics for 2010 were just released by the national statistics office, GOSKOMSTAT, also known as ROSTAT. The birth rate continues to rise but not as sharply in the past two years as it did in 2007 and 2008. One must wonder if the slower increase in the past two years suggests the birth rate revival may be running out of steam or that it may be due to the global recession. But natural decrease is now but one-fourth of what it was in 2000 and that is a truly dramatic turnaround. The TFR can be estimated at about 1.56 for 2010 although we must wait for the official TFR when it is released later this year. Births for January 2011 have also been released and those are down slightly from January 2010, 131,454 from 132,371. One month hardly defines a trend but I thought I’d pass that along.

This is still below replacement, but is substantially higher than the estimates from 2000, when the birth rate per woman bottomed out at roughly 1.2. At the time, everyone was extrapolating a near-certain downward spiral in births.

This brings to mind an article from Nature from a couple of years ago that argued that fertility follows a “J” curve with respect to human development. The graph plots fertility against human development (HDI) by country in two time periods.

That is, rather than fertility declining irreversibly with higher levels of development (which is what one might have thought in 1975, or in Russia through the 1990s), fertility appears to recover a bit at the highest levels of development. This doesn’t apply to all countries — Japan and Italy may have been left behind — but it partially explains the relatively high fertility rate of, say, native-born Americans. Explaining the drop in fertility with rising development is easy; explaining the subsequent rise is a little tougher. I see two basic options:

1) It’s important that the measure here is HDI, as opposed to GDP per capita. What’s crucial is the level of female empowerment. Where women have the option to work and raise children, they frequently do so. Where they cannot do so as easily (Germany for instance, where a substantial cohort of women remain childless and attached to the workforce), women are simply forced to choose. It’s no coincidence that countries like Japan or Italy see plummeting fertility even at high levels of income.

2) This represents the optimal parenting strategy across income ranges. At Malthusian levels of income, additional income is spent on more children. As incomes rise, families start to face a “quantity/quality” tradeoff that leads them to invest more in fewer children. At yet higher levels of income, families are able to invest fully in multiple children.

It’ll be worth seeing whether some of the low-fertility countries out there today — particularly in Southern/Eastern Europe and Eastern Asia — recover. At some point, many countries will also start maxing out their HDI, and we’ll need another indicator. Perhaps people are reading Selfish Reasons to Have More Kids.

George Price, Group Selection, and Altruism

This concludes a series of posts on the work of George Price. For the most recent one, with links to the others, see here*. This final post covers the subject of group selection.

Price and Group Selection

The application of Price’s Equation to group selection, and the related problem of biological altruism, is largely responsible for the current interest in Price, as shown in Oren Harman’s biography. The controversy over group selection dates from the early 1960s, as discussed here*. Price attempted to cut through the controversy with a simple new approach. Using Price’s Equation, the overall change in frequency of a gene in a population between two generations can be broken down into two components, which I call the Covariance and Transmission terms. Price’s simple proposal was to identify the effect of group selection with the Covariance term, while selection on individuals (or genes within individuals) is covered by the Transmission term [Price, 1972, 488]. Price’s own work was cut short by his untimely death, but his approach received a boost when it was endorsed (with some qualifications) by W. D. Hamilton [Narrow Roads, vol.1, 333]. Yet it failed to attract much interest for another decade, and is still not generally accepted.
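For readers who want the formula itself, the usual modern statement of Price’s Equation is given below (notation varies between treatments; this is the standard textbook form rather than a quotation from Price’s own papers).

```latex
\bar{w}\,\Delta\bar{z}
  = \underbrace{\operatorname{Cov}(w_i, z_i)}_{\text{Covariance term}}
  + \underbrace{\operatorname{E}\!\left(w_i\,\Delta z_i\right)}_{\text{Transmission term}}
```

Here z_i is the frequency of the gene in group i, w_i that group’s fitness, and the bars denote population averages. Applied to a population structured into groups, the Covariance term captures selection between groups, while the Transmission term collects the change occurring within groups – the partition Price proposed to identify with group and individual selection respectively.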
Read the rest of this entry »

The heritability debate, again

Like the level of selection debate, the debate about what heritability means has a life of its own. The latest shot comes from Scott Barry Kaufman who argues (among other things) that:

The heritability of a trait can vary from 0.00 to 1.00, depending on the environments from which research participants are sampled. Because we know that genes play some role in the development of any trait, the precise heritability estimate doesn’t matter in a practical sense.

Heritability depends on the amount of variability in the environmental factors that contribute to a trait. The problem is that our understanding of the factors that contribute to the development of human traits in general — and to IQ in particular — is currently so deficient that we typically do not know if the environmental factors important in the development of a particular trait are stable across testing situations, vary somewhat across those situations, or vary wildly across those situations.

In his conclusion he states:

At the very least, heritability tells us how much of the variation in IQ can be accounted for by variation in genetic factors when development occurs in an exquisitely specific range of environments. However, David S. Moore has argued that even this is not significant when we realize that the magnitude of any heritability statistic reflects the extent of variation in unidentified non-genetic factors that contribute to the development of the trait in question.

(HT: Bryan Caplan)

Through his post, Kaufman constructs a series of paper tigers, tears them down and implies that because the extreme case does not hold, we should be wary of heritability estimates. I did not find much to disagree with in his examples, but I differed on the conclusions we should draw.

So, where I do not agree – first, the heritability estimate does matter. While I don’t think it is hugely important whether the heritability of IQ in a specific sample is 0.5 or 0.6, it is important whether the measured heritability is 0 or 0.6. As Caplan notes in his post:

My money says, for example, that the average adult IQ heritability estimate published in 2020 will exceed .5.

I think that Caplan is right (although I might have stated some conditions about the relevant sample), and Kaufman’s argument overstates how finely tuned the environment needs to be to get a meaningful heritability estimate. Heritability estimates of a sample of children growing up in extreme poverty might be much lower (or zero) but as is found again and again, once the basic requirements of a child are met, heritability estimates for IQ are consistently above 0.4. We can construct arguments that in each study there are different gene-environment interactions and so on, but if genes weren’t important in variation in IQ and the gene-environment interactions weren’t consistent to some degree, why would such consistent heritability results (and correlation between parent and child IQ) be found?
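It helps to keep the definition in view. In the standard variance-decomposition notation (a textbook formulation, not Kaufman’s or Caplan’s own), narrow-sense heritability is:

```latex
h^2 = \frac{V_A}{V_P} = \frac{V_A}{V_A + V_C + V_E}
```

where V_A is additive genetic variance, V_C shared-environment variance and V_E the remaining environmental variance. Because the environmental terms sit in the denominator, sampling from a restricted range of environments mechanically pushes the estimate up, while sampling from deprived or wildly variable environments pushes it down – which is consistent with low or zero estimates in severely deprived samples and estimates consistently above 0.4 once basic needs are met.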

Further, these results matter. They suggest that poverty is affecting the IQ of some children, and policies could be tailored to cut this disadvantage. For children not subject to deficient environments, the high heritability of IQ should influence policies such as those for education. Children are different and the education system should take this into account.

Implicit in Kaufman’s post was the “it’s all too complex” argument. Social and biological sciences are complex (which is why I find them interesting). However, if we fully accepted Kaufman’s argument that “our understanding of the factors that contribute to the development of human traits … is currently so deficient that we typically do not know if the environmental factors important in the development of a particular trait are stable across testing situations”, it would put into question most of the data analysis in economics, sociology and biology. Econometrics operates on the idea of all other things being equal.

Fortunately, Kaufman has not taken the Gladwell-esque approach of suggesting that we forget about genetic factors. Kaufman suggests further research into how nature and nurture are intertwined. If it is all too complex, we should start unwinding the complexity. However, I believe that, in the meantime, this complexity does not mean that we should throw out all the results that have previously been obtained.

**This is a cross-post from my blog Evolving Economics.

Income and IQ

As I noted in my recent post on Malcolm Gladwell’s Outliers, Gladwell ignored the possibility that traits with a genetic component, other than IQ, might play a role in determining success. His approach reminded me of a useful paper by Samuel Bowles and Herbert Gintis from 2002 on the inheritance of inequality. Bowles and Gintis sought to explain the observed correlation between parental and child income (a correlation of around 0.4) by examining IQ, other genetic factors, environment, race and schooling.

As an example of the consequences of the transmission of income, Bowles and Gintis cited a paper by Hertz which showed that a son born to someone in the top decile of income had a 22.9 per cent chance of attaining that decile himself, compared to a 1.3 per cent chance for someone born to parents in the bottom decile. Conversely, a child born to parents in the top decile had only a 2.4 per cent chance of finishing in the lowest decile, compared to over 31.2 per cent for those born to bottom decile parents.

As Gladwell did, Bowles and Gintis started their examination with IQ. To calculate the inheritance of income through genetically inherited IQ, Bowles and Gintis considered the correlation between parent IQ and income, the heritability of IQ from parent to child and the correlation between IQ and income for the child. Breaking this down, Bowles and Gintis used the following steps and estimates:

1. The correlation between parental income and IQ is 0.266.

2. If the parents’ genotypes are uncorrelated, the genetic correlation between the genotype of the parents and of the child is 0.5. This can be increased with assortative mating (people pairing with people more like themselves) to a maximum of one (clones mating). Bowles and Gintis use 0.6.

3. The heritability of IQ is 0.5.

4. The correlation between child income and IQ is 0.266.

Multiplying these four numbers together gives the intergenerational correlation of income due to genetically based transmission of IQ. I think there is a mistake in the calculations used by Bowles and Gintis, as they find an intergenerational correlation of 0.01, where I calculated 0.02. This leads to genetically inherited IQ variation explaining 5.3 per cent of the observed intergenerational correlation in income. Regardless of the error, this is a low proportion of the income heritability. (After I wrote this post I did a Google search to find if someone had spotted this error before – and they had – on an earlier Gene Expression post on this same paper.)

I would have used some slightly higher numbers, but pushing the numbers to the edges of feasible estimates – such as increasing the correlation between income and IQ to 0.4, the genetically based correlation between parent and child IQ to 0.8 and the degree of assortative mating so that the parent-child genotype correlation is 0.8 – only yields an intergenerational correlation of 0.10. Genetically inherited IQ would account for approximately 26 per cent of the observed intergenerational correlation.
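The arithmetic in the last two paragraphs is just the product of the four numbers, expressed as a share of the 0.4 parent-child income correlation. A quick script to check it – this reproduces the multiplication described above, not the full path model in the paper:

```python
# Check of the IQ-channel arithmetic described above: the product of the four
# quoted numbers, expressed as a share of the 0.4 parent-child income
# correlation that the paper sets out to explain.

def iq_channel(parent_income_iq, parent_child_genotype, h2_iq, child_income_iq,
               income_corr=0.4):
    channel = parent_income_iq * parent_child_genotype * h2_iq * child_income_iq
    return channel, channel / income_corr

print(iq_channel(0.266, 0.6, 0.5, 0.266))  # ~(0.021, 0.053): ~0.02, or ~5% of the correlation
print(iq_channel(0.4, 0.8, 0.8, 0.4))      # ~(0.102, 0.256): ~0.10, or ~26% of the correlation
```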

Unlike Gladwell, Bowles and Gintis then asked what role other genetic factors may play. By using twin studies, which provide an estimate of the degree of heritability of income (using the difference in correlation between fraternal and identical twins) and the degree of common environments of each type of twin, Bowles and Gintis estimated that genetic factors explain almost a third (0.12) of the 0.4 correlation between parent and child income. Loosening their assumptions on the degree of shared environments by identical twins compared to fraternal twins (i.e. assuming near identical environments for both identical and fraternal twins) can generate a higher estimate of the genetic basis of almost three-quarters of the variability in income.
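For readers unfamiliar with the twin method, the usual Falconer-style way of extracting these quantities from twin correlations is shown below. Bowles and Gintis’s actual estimator differs in its details, but the logic – heritability from the gap between identical and fraternal twin correlations – is the same.

```latex
h^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad c^2 \approx 2\,r_{DZ} - r_{MZ}
```

where r_MZ and r_DZ are the income correlations for identical and fraternal twins and c^2 is the share of variance attributed to shared environment. Allowing identical twins to share environments more closely than fraternal twins, as Bowles and Gintis did, lowers the estimated genetic share; assuming near-identical environments for both types, as in the last sentence above, raises it.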

From this, it seems that genetic inheritance plays an important role in income transmission between generations. The obvious question is what these factors might be. I expect that patience or ability to delay gratification must play a role, although I would expect that there would be a broad suite of relevant personality traits. I would also expect that appearance and physical features would be relevant. Bowles and Gintis do not take their analysis to this point.

The authors finished their analysis with some consideration of other factors, and concluded that race, wealth and schooling are more important than IQ as a transmission mechanism of income across generations (although, as the authors noted, they may have overestimated the importance of race by not including a measure of cognitive performance in the regression). That conclusion may be fair, but as they had already noted, there is a substantial unexplained genetic component.

This highlights the paper’s limitation, as once the specific idea that heritability of IQ is a substantial cause of intergenerational income inequality has been dented, the identification of other (but unknown) genetic factors leaves open a raft of questions about income heritability. Using Bowles and Gintis’s conservative estimates, we still have 25 per cent of income heritability being put down to genetic factors without any understanding of what these traits are and the extent of the role they play.

In their conclusion, Bowles and Gintis touch on whether policy interventions might be based on these results. They are somewhat vague in their recommendations, but suggest that rather than seeking zero intergenerational correlation, interventions should target correlations that are considered unfair. They suggest, as examples, that there are large majorities supporting compensation for inherited disabilities while intervention for good looks is not appropriate.

One thing I find interesting in an analysis of heritability such as this is that over a long enough time horizon, to the extent that someone with a trait has a fitness advantage (or disadvantage), the gene(s) behind the trait will move to fixation (or be eliminated) as long as heritability is not zero. The degree of heritability is relevant only to the rate at which this occurs and only in a short-term context. The obvious question then becomes (which is besides the point of this post) whether IQ currently yields a fitness advantage. Over a long enough time period, variation will tend to eliminate itself and Bowles and Gintis would be unable to find any evidence of IQ heritability affecting income across generations.
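The point that heritability governs the rate of change rather than the eventual outcome is captured by the breeder’s equation – a standard result in quantitative genetics, stated here for context rather than taken from Bowles and Gintis:

```latex
R = h^2 S
```

where S is the selection differential (the within-generation difference in the trait between selected parents and the population as a whole) and R is the response – the change in the trait mean in the next generation. Any non-zero h^2 combined with a persistent S produces a cumulative response; the size of h^2 only sets how fast it accumulates.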

**This is a cross-post from my blog Evolving Economics, which is my usual blogging home.

Bowles, S., & Gintis, H. (2002). The Inheritance of Inequality Journal of Economic Perspectives, 16 (3), 3-30 DOI: 10.1257/089533002760278686
