Where do morals come from?


Review of “Braintrust: What Neuroscience Tells Us about Morality”, by Patricia S. Churchland

The question of “where morals come from” has exercised philosophers, theologians and many others for millennia. It has lately, like many other questions previously addressed only through armchair rumination, become addressable empirically, through the combined approaches of modern neuroscience, genetics, psychology, anthropology and many other disciplines. From these approaches a naturalistic framework is emerging to explain the biological origins of moral behaviour. From this perspective, morality is neither objective nor transcendent – it is the pragmatic and culture-dependent expression of a set of neural systems that have evolved to allow our navigation of complex human social systems.
“Braintrust”, by Patricia S. Churchland, surveys the findings from a range of disciplines to illustrate this framework. The main thesis of the book is grounded in the approach of evolutionary psychology but goes far beyond the just-so stories of which that field is often accused, by offering not just a plausible biological mechanism to explain the foundations of moral behaviour, but one with strong empirical support.

The thrust of her thesis is as follows:

Moral behaviour arose in humans as an extension of the biological systems involved in recognition and care of mates and offspring. These systems are evolutionarily ancient, encoded in our genome and hard-wired into our brains. In humans, the circuits and processes that encode the urge to care for close relatives can be co-opted and extended to induce an urge to care for others in an extended social group. These systems are coupled with the ability of humans to predict future consequences of our actions and make choices to maximise not just short-term but also long-term gain. Moral decision-making is thus informed by the biology of social attachments but is governed by the principles of decision-making more generally. These entail looking not so much for the right choice as for the optimal choice, based on satisfying a wide range of relevant constraints, and assigning different priorities to them.

This does not imply that morals are innate. It implies that the capacity for moral reasoning and the predisposition to moral behaviour are innate. Just as language has to be learned, so do the codes of moral behaviour, and, also like language, moral codes are culture-specific, but constrained by some general underlying principles. We may, as a species, come pre-wired with certain biological imperatives and systems for incorporating them into decisions in social situations, but we are also pre-wired to learn and incorporate the particular contingencies that pertain to each of us in our individual environments, including social and cultural norms.

This framework raises an important question, however – if morals are not objective or transcendent, then why does it feel like they are? This is, after all, the basis for all this debate – we seem to feel implicitly that things are right or wrong, rather than just being intellectually aware that they conform to or violate social norms. The answer is that the systems of moral reasoning and conscience tap into, or more accurately emerge from, ancient neural systems grounded in emotion – in particular, in attaching emotional value or valence to different stimuli, including the imagined consequences of possible actions.

This is, in a way, the same as asking why pain feels bad. Couldn’t it work simply by alerting the brain that something harmful is happening to the body, which should therefore be avoided? A rational person could then take an action to avoid the painful stimulus or situation. Well, first, that does not sound like a very robust system – what if the person ignored that information? It would be far more adaptive to encourage or enforce the avoidance of the painful stimulus by encoding it as a strong urge, forcing immediate and automatic attention to a stimulus that should not be ignored and that should be given high priority when considering the next action. Even better would be to use the emotional response to also tag the memory of that situation as something that should be avoided in the future. Natural selection would favour genetic variants that increased this type of response and would select against those that decoupled painful stimuli from the emotional valence we normally associate with them (they feel bad!).

In any case, this question is approached from the wrong end, as if humans were designed out of thin air and the system could ever have been purely rational. We evolved from other animals without reason (or with varying degrees of problem-solving faculties). For these animals to survive, their neural systems had to encode urges and beliefs in such a way as to optimally control behaviour. Attaching varying levels of emotional valence to different types of stimuli offers a means to prioritise certain factors in making complex decisions (i.e., those factors most likely to affect the survival of the organism or the dissemination of its genes).

For humans, these important factors include our current and future place in the social network and the success of our social group. In the circumstances under which modern humans evolved, and still to a large extent today, our very survival and certainly our prosperity depend crucially on how we interact and on the social structures that have evolved from these interactions. We can’t rely on tooth and claw for survival – we rely on each other. Thus, the reason moral choices are tagged with strong emotional valence is that they evolved from systems designed for optimal control of behaviour. Or, despite this being a somewhat circular argument, the reason they feel right or wrong is that it is adaptive to have them feel right or wrong.

Churchland fleshes out this framework with a detailed look at the biological systems involved in social attachments, decision-making, executive control, mind-reading (discerning the beliefs and intentions of others), empathy, trust and other faculties. There are certain notable omissions here: the rich literature on psychopaths, who may be thought of as innately deficient in moral reasoning, receives surprisingly little attention, especially given the high heritability of this trait. As an illustration that the faculty of moral reasoning relies on in-built brain circuitry, this would seem to merit more discussion. The chapter on Genes, Brains and Behavior rightly emphasises the complexity of the genetic networks involved in establishing brain systems, especially those responsible for such a high-level faculty as moral reasoning. The conclusion that this system cannot be perturbed by single mutations is erroneous, however. Asking what it takes, genetically speaking, to build the system is a different question from asking what it takes to break it. Some consideration of how moral reasoning emerges over time in children would also have been interesting.

Nevertheless, the book does an excellent job of synthesising diverse findings into a readily understandable and thoroughly convincing naturalistic framework under which moral behaviour can be approached from an empirical standpoint. While the details of many of these areas remain sketchy, and our ignorance still vastly outweighs our knowledge, the overall framework seems quite robust. Indeed, it articulates what is likely a fairly standard view among neuroscientists who work in or who have considered the evidence from this field. However, one can presume that jobbing neuroscientists are not the main intended audience and that both the details of the work in this field and its broad conclusions are neither widely known nor widely held.

The idea that right and wrong – or good and evil – exist in some abstract sense, independent from humans who only somehow come to perceive them, is a powerful and stubborn illusion. Indeed, for many inclined to spiritual or religious beliefs, it is one area where science has not until recently encroached on theological ground. While the Creator has been made redundant by the evidence for evolution by natural selection and the immaterial soul similarly superfluous by the evidence that human consciousness emerges from the activity of the physical brain, morality has remained apparently impervious to the scientific approach. Churchland focuses her last chapter on the idea that morals are absolute and delivered by Divinity, demonstrating firstly the contradictions in such an idea and, with the evidence for a biological basis of morality provided in the rest of the book, arguing convincingly that there is no need of that hypothesis.

Mirrored from Wiring the Brain.


  1. Does she address the game theoretic analysis of egoism vs. altruism?

  2. She does not go into details on that specific topic, but the general question of the kinds of selective forces that might drive apparently altruistic behaviour does arise. My own feeling is that the game-theoretic models favoured by economists are under-parameterised and too focused on short-term outcomes to be truly informative when discussing the kinds of social pressures and imperatives that might have favoured the emergence of “moral” behaviour over evolutionary timescales.
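
    The point about short-term outcomes can be made concrete with a minimal iterated prisoner’s dilemma sketch (purely illustrative; the payoff values are the standard textbook ones, T=5, R=3, P=1, S=0, and the strategy names are my own): in a single round defection dominates, but over repeated interactions mutual cooperators outscore mutual defectors.

    ```python
    # Payoff table for (my move, opponent's move): (my score, their score).
    # Standard illustrative values: temptation 5, reward 3, punishment 1, sucker 0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(strat_a, strat_b, rounds):
        """Return the total payoffs for two strategies over repeated rounds.

        Each strategy is a function of the opponent's move history.
        """
        score_a = score_b = 0
        hist_a, hist_b = [], []
        for _ in range(rounds):
            move_a = strat_a(hist_b)
            move_b = strat_b(hist_a)
            pa, pb = PAYOFF[(move_a, move_b)]
            score_a += pa
            score_b += pb
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    def always_defect(opp_history):
        # The one-shot "rational egoist": defection dominates in a single round.
        return "D"

    def tit_for_tat(opp_history):
        # Cooperate first, then mirror the opponent's previous move.
        return opp_history[-1] if opp_history else "C"

    print(play(always_defect, always_defect, 100))  # (100, 100)
    print(play(tit_for_tat, tit_for_tat, 100))      # (300, 300)
    ```

    Over 100 rounds the mutual defectors score 100 each while the mutual cooperators score 300 each, which is the basic reason long-horizon models can favour apparently altruistic behaviour where one-shot models cannot.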

  3. The extent to which language is culturally defined seems overstated, in the sense that yes, the phonology and the specific syllables differ between language groups, but there is, in my mind, a strong underlying similarity between all human languages. E.g., they all have grammar and draw on a small set of basic word orders (SVO, SOV, VSO, etc.), and some languages use more than one. So it seems that only superficial (for some definition of superficial) details differ, and babies can learn pretty much any language. This, in my mind, argues that we are more than just primed to learn language: we are tuned to a small number of alternatives, and there is extensive hardware support.

    Similarly, I would think that the ‘moral’ problems we face are similar from culture to culture. Betrayal of friends and relatives is the same in any culture. It would seem to me that the differences will be along the lines of differences between groups that have spent a longer time under selection for very large and complex societies (e.g., China over the last 4,000 years) and those that have spent much of those same 4,000 years in small-scale cultures. The same may hold between males and females, for whom the problems have, until very recently, been very different, it seems to me. However, I would expect that the base ‘moral’ hardware is the same among all human groups, with some groups elaborating on this base under different selection pressures.

    It seems that perhaps I should read the book because, notwithstanding my comments, much of what you have said Churchland says sounds eminently sensible.

  4. Thanks for your comment, Richard. I actually was attempting to say exactly what you just said but may have over-emphasised the culture-dependent differences. As you say, these are probably more superficial and overlaid on strong constraining principles that are hard-wired in the brain’s circuitry, both for morals and for language.

  5. Nicely written!

    “Love thy neighbor as thy self” came to mind while reading this sentence: “Moral behaviour arose in humans as an extension of the biological systems involved in recognition and care of mates and offspring.”

    Also, concerning the “universality” of (certain) moral values: the ideal of an international order based on reason and justice in place of force and fraud makes sense in the nuclear age, a matter of collective, enlightened self-interest. Or at least it could be seen that way. The Hebraic idea of God was originally conceived in those terms (minus nuclear weapons, but as a matter of enlightened self-interest) at least by my close reading of the Patriarchal narratives in Genesis (not the Mosaic period later). The early Hebrews were a small, weak trading people who depended on their reputation for honesty and fair-dealing to survive in a matrix of powerful city-states among whom they lived and moved and “had their being.”

    The idea of a just God who would protect the weak was later taken up by the masses of ordinary peasants and craftsmen as a matter of (perhaps unenlightened) self-interest. At least this is a possible reading.

    But again, that was a great piece of synthesis.

  6. I completely fail to see how the evolution of our sense of morality (assuming it is evolved) disproves the existence of absolute objective morality. That’s equivalent to saying that because we are evolved to discern truth from falsehood, there is no such thing as objective truth.

  7. Jonathan, thanks for your comment. The arguments I make above and that Churchland makes in her book do not disprove the existence of abstract moral truths. They simply present a plausible, coherent and parsimonious scientific framework that explains how a moral sensibility could have arisen due to the evolutionary pressures associated with our complex social structure. In light of that framework, there is no need to invoke the idea that moral truths exist somewhere in an abstract and objective sense, with all the philosophical complications that entails.

    That does not mean we are slaves to our evolutionarily programmed imperatives or that all apparently moral choices are simply the expression of those subconscious constraints (like, “care for your young” or “don’t do things that will get you socially ostracised”). The cerebral cortex allows us to transcend the drives of evolution. Because we have the capacity for abstract thought, we can think about morality at a remove from biological drives and consider whether we can come up with societal rules that might be better – i.e., improve the well-being of more people – than the ones we come programmed with. Even if we do that, however, these will still be pragmatically derived rules or guidelines based on our consideration of the factors that will best enhance fairness, societal stability, prosperity, etc.

    None of those will be a “truth” waiting to be discovered. The problem with that idea is that you then have to say where they come from – if it’s from a God, then you are left with the problem posed by Socrates: “Are things good because God says they are?” (That seems a bit open to the whims and caprices of the deity, especially if you look at some of the rather bizarre proscriptions associated with various religions). “Or does God say they are good because they ARE good?” If so, that still leaves the question of how they are defined as good.

    Morality is about making optimal choices, taking many different parameters into consideration, not about discovering the “fact” of which choice is correct. Your analogy with discerning truth from falsehood (I presume in others’ statements) thus breaks down because in that case there is a truth to be discerned.

  8. Thanks, kjmtchi. I don’t really follow your reasoning. You say “morality is about making optimal choices”, but then end the sentence by saying it’s “not about discovering the ‘fact’ of which choice is correct.” To me that is a contradiction. Is the optimal choice not the correct choice? How do you think those who believe in an objective morality make their choices? Isn’t it also about finding the optimal choices, taking into consideration different parameters? What makes your parameters reasonable and their parameters unreasonable?

  9. The argument is that the parameters come from us – they do not exist without us. It is not wrong to kill someone in an abstract sense, independent of human society. This is not a truth that existed that was discovered by or revealed to humans. It is an evolutionarily optimal strategy within human societies and one which has been codified by cultural structures. It is not wrong because God says it is wrong. And it is not wrong because that simply is a philosophical truth that can be deduced logically.

    I think those who believe in an objective morality make their choices the same way as those who do not. I just think they are mistaken about the existence of objective moral truths (except in the sense of being something that society agrees on – but that is different from how I am using the term, to mean having an abstract existence separate from human culture and society).

    In the end, there are many moral choices that are quite obviously optimal, regardless of your source of reasoning. But if someone bases their moral judgments on what it says in a sacred text, then I would say that is unreasonable, especially as many of the proscriptions within such texts do not appear to be particularly moral to most people now.
