‘If God does not exist, everything is permitted’. Dostoevsky never actually wrote that line, though so often is it attributed to him that he may as well have. It has become the almost reflexive response of believers when faced with an argument for a godless world. Without religious faith, runs the argument, we cannot anchor our moral truths or truly know right from wrong. Without belief in God we will be lost in a miasma of moral nihilism.
In recent years, the riposte of many to this challenge has been to argue that moral codes are not revealed by God but instantiated in nature, and in particular in the brain. Ethics is not a theological matter but a scientific one. Science is not simply a means of making sense of facts about the world, but also of values, because values are in essence facts in another form.
There is a spectrum of views about how science can throw light on moral values. At the soft end of this spectrum is the suggestion that our capacity for moral thought lies in our evolutionary history, the evidence for which derives primarily from primatology and evolutionary and developmental psychology. The degree to which our capacity for moral thought is the product of natural selection remains a matter of debate. In principle, however, the idea that our ability to think in terms of right and wrong may, in part, have evolutionary roots should not be controversial.
At the hard end of the spectrum is the claim, not just that our capacity for moral thought may have been selected for, but also that moral rules are written into the brain. Some, like the cognitive psychologist Marc Hauser, whose work is currently under scrutiny by Harvard authorities for the possible fraudulent manipulation of experimental data, argue that we possess a ‘moral organ’ akin to Noam Chomsky’s language organ, ‘equipped with universal moral grammar, a toolkit for building specific moral systems.’ ‘Our moral instincts’, Hauser believes, ‘are immune to explicitly articulated commandments handed down by religions and governments.’
Others, such as the philosopher Sam Harris, suggest that values are facts about ‘states of the human brain’ and so to study morality we have to study neural states. ‘The wellbeing of humans and animals must depend on states of the world and on states of their brains’, Harris writes, ‘and science represents our most systematic means of understanding these states’. Science, and in particular neuroscience, does not simply explain why we might respond in particular ways to equality or to torture, but also tells us whether equality is a good and torture morally acceptable. For such neuromoralists, the best way to distinguish between good and evil is in an fMRI scanner.
I want to explore what I’ve called the hard end of this spectrum, the claims of those whom we might dub the ‘neuromoralists’, and in particular the argument that through neuroscience we can best define our moral values. The arguments of the hard end remain controversial and do not constitute anything close to a consensus. They are nevertheless useful to examine because they provide insights into the broader problems underlying many of the social and philosophical claims of contemporary neuroscience. I want:
1. Briefly to survey the arguments that morality can be instantiated in the brain.
2. To show how such a naturalistic claim, as much as a theological claim that morality can be instantiated in God, is confronted by Plato’s famous Euthyphro dilemma: that either morality is an arbitrary set of rules or it requires an independent gauge of right and wrong.
3. To suggest that underlying the Euthyphro dilemma is a deeper claim: morality can only be non-arbitrary if it is self-created by rational agents, that is by subjects as opposed to objects.
4. And, finally, to explore why not just neuromoralists but contemporary science more broadly finds it difficult to engage with the idea of humans as subjects.
The neuromoralist argument is rooted in three basic claims:
1. Moral values are moral facts. In his new book, The Moral Landscape: How Science Can Determine Human Values, Sam Harris writes that ‘Questions about values are really questions about the well-being of conscious creatures. Values, therefore, translate into facts that can be scientifically understood: regarding positive and negative social emotions, the effects of specific laws on human relationships, the neurophysiology of happiness and suffering, etc.’
2. Moral facts can be discerned from the way our brains work and the evolutionary reasons by which they have evolved to work in this fashion. Philosopher Patricia Churchland argues in her forthcoming book, Braintrust: What Neuroscience Tells Us About Morality, that ‘morality originates in the neurobiology of attachment and bonding’, and in the ‘oxytocin-vasopressin network in mammals [that] can be modified to allow care to be extended beyond one’s litter of juveniles’.
3. The scientific study of brain processes and the evolutionary pressures that underlie them should be the basis on which we decide between moral choices. Where there are disagreements over moral questions, Sam Harris writes, ‘science will… decide’ which view is right ‘because the discrepant answers people give to them translate into differences in our brains, in the brains of others and in the world at large.’ Bioethicist Julian Savulescu, Director of the Uehiro Center for Practical Ethics at Oxford, takes it further. Since ‘our moral dispositions are based in our biology’ and hence ‘malleable by biomedical and genetic means’, we should look to science not only to determine right and wrong but also to make humans more right than wrong. ‘Safe, effective moral enhancements’, by which Savulescu means genetic, pharmacological and neurophysiological interventions, should, he insists, ‘be obligatory, like education or fluoride in the water.’
The core of the argument, then, is that science, and in particular evolutionary biology and neuroscience, can bridge the gap between ought and is, by turning moral claims into scientific facts; and that it should be imperative upon us to use scientific data both to set moral norms and to make humans more moral.
There is, of course, a voluminous philosophical literature on the debate between moral realists and moral anti-realists. It is not a debate into which I intend stepping, at least in this talk. What is striking, though, given the insistence on the factual character of moral values, is the disjuncture in neuromoralist discourse between fact and value, between is and ought. Take Sam Harris’ claim that morality ‘really relates to the intentions and behaviours that affect the well-being of conscious creatures’ and so can ‘translate into facts that can be scientifically understood’.
Why should morality relate solely to the ‘well-being of conscious creatures’? Why not, as some insist, to the well-being of the planet? Or of ecosystems? Or, as others argue, to the well-being of humans, as autonomous moral agents, rather than to that of all conscious creatures? I can think of rational arguments that can help distinguish between these claims. But I can think of no empirical test that can do so. Nor does Harris suggest any. And if there is no such test, it is difficult to know how it is a fact that can be scientifically understood.
Let us grant that morality does relate solely to the well-being of conscious creatures. How then do we define well-being? This of course has been at the heart of more than two millennia of philosophical debate. But what scientific test can be used to define what constitutes well-being?
Neuromoralists argue that well-being can be defined through data gained from fMRI scans, physiological observation and pharmacological measures. Such studies may be able to tell us which brain states, neurotransmitters or hormones correlate with particular conditions in the external world. But whether those states, transmitters or hormones are seen as indicators of well-being depends on whether we consider those real-life conditions as expressions of well-being.
There are, for example, a host of studies that demonstrate the ways in which cooperation and trust are sensitive to oxytocin levels in the brain. In one famous experiment conducted by the neuroeconomist Michael Kosfeld, two players are pitted against each other in a decision-making game called ‘Trust’. One is an ‘investor’, the other a ‘trustee’. Each player receives $12 of real money, a proportion of which the investor is able to invest with the trustee. The experimenter adds twice as much again to any amount the investor decides to invest before giving it to the trustee. The trustee can then return however much he wishes back to the investor. After that the game is over and both players keep whatever money they have gained.
The more money the investor invests, the more money is in the pot, because the experimenter will triple every investment. But the investor has to trust that the trustee will return a goodly part of that investment – otherwise he would lose out and would have been better off simply keeping his original $12. The trustee has in turn to decide whether to be selfish and keep all the invested money or to be moral and return what he thinks is a fair amount to the investor. The game is therefore a way of investigating trust and trustworthiness. Kosfeld discovered that giving players oxytocin significantly increased the amount of money they were willing to invest – in other words, they showed greater trust in the other player to play fair – though it made no difference to the amount the trustee returned to the investor.
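The payoff arithmetic of the game can be made concrete in a few lines of code. This is a minimal sketch of the incentive structure described above, not the experimental protocol itself; the function and parameter names are my own, not drawn from the study.

```python
def trust_game(investment, returned, endowment=12, multiplier=3):
    """Compute (investor, trustee) payoffs in a simple trust game.

    The investor sends `investment` (between 0 and `endowment`) to the
    trustee; the experimenter multiplies it (adding twice as much again,
    hence a multiplier of 3); the trustee then sends `returned` back.
    """
    assert 0 <= investment <= endowment
    pot = investment * multiplier          # what the trustee receives
    assert 0 <= returned <= pot
    investor_payoff = endowment - investment + returned
    trustee_payoff = endowment + pot - returned
    return investor_payoff, trustee_payoff

# No trust: the investor keeps the original $12.
print(trust_game(0, 0))     # (12, 12)
# Full trust, selfish trustee: the investor loses everything.
print(trust_game(12, 0))    # (0, 48)
# Full trust, fair return: both end up better off than at the start.
print(trust_game(12, 24))   # (24, 24)
```

The sketch makes the dilemma plain: total winnings are maximized only by full investment, yet full investment leaves the investor entirely at the trustee’s mercy.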
Neuromoralists have seized upon such studies as demonstrations of the power of science in determining our understanding of what constitutes the good and well-being. Studies such as Kosfeld’s provide important insights both into social psychology and into the physical workings of the brain. But they reveal little about what constitutes, or rather what should constitute, morality, fairness or well-being. Indeed, we can only make sense of the data from an experiment like Kosfeld’s because we already possess accepted norms of what is good, fair and conducive to human flourishing.
In the end, what is striking about the claims of the neuromoralists is the irrelevance of the neuroscience they expound, important though it is to our understanding of brain and behaviour, to the moral arguments they adduce. Consider, for instance, Patricia Churchland’s argument that it is a ‘false dilemma’ to claim that ‘either God secures the moral law or morality is an illusion’ because ‘Morality is grounded in our biology, in our capacity for compassion and our ability to learn and figure things out’. She then continues:
As a matter of actual fact some social practices are better than others, and genuine assessments can be made against the standard of how well or poorly they serve human well-being. Allowing women to vote has, despite dire predictions of disaster, turned out reasonably well, whereas the laws allowing private citizens to own assault weapons in the United States have had quite a lot of deleterious consequences. Abolition of slavery, though a fairly recent development, is surely, as a matter of well-being, better than slavery.
All of which is true. But there is here no relationship between the claim that ‘morality is grounded in our biology’ and the fact that ‘some social practices are better than others’. Slavery was abolished, and women won the vote, without science, still less neuroscience, having to establish that enslavement is bad or equality good. Indeed the ‘neuroscience’ of the day, such as phrenology and craniology, was used more often than not to argue in favour of racial hierarchy and male superiority. Such disciplines, we now recognize, were of course pseudosciences. At that time, however, the study of bumps on the head or the shape of skulls was taken as seriously as today we take the study of fMRI scans or oxytocin levels.
The attempt to link neuroscience and morality does nothing more than provide a model illustration of Hume’s famous observation in A Treatise of Human Nature:
In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary way of reasoning… when all of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence.
The desire to root morality in neuroscience derives from an aspiration to demonstrate the redundancy of religion to ethical thinking. The irony, though, is that the classic argument against looking to God as the source of moral values – Plato’s Euthyphro dilemma – is equally applicable to the claim that the brain is the source of moral values.
In his dialogue Euthyphro, Plato sets up a discussion between Socrates and Euthyphro, who is about to prosecute his father for the murder of one of his servants. Socrates is shocked by Euthyphro’s action, which appears to disregard both convention and his obligations to kin, and wants to know how Euthyphro distinguishes between the pious and the impious, the good and the bad.
Euthyphro provides a series of definitions, each of which Socrates knocks down. Socrates’ key question is this: Do the gods love the good because it is good, or is it good because it is loved by the gods? Unless the gods love something for no good reason, then they must love something as pious because it inherently possesses value. But if it inherently possesses value, then it does so independently of the gods.
Or as Leibniz asked at the beginning of the eighteenth century, if it is the case that whatever God thinks, wants or does is good by definition, then ‘what cause could one have to praise him for what he does if in doing something quite different he would have done equally well?’ If, on the other hand, God recognizes what is good and promotes it because of its inherent goodness, then goodness must exist independently of God. It might make sense now to revere God’s goodness. But God is no longer the source of that goodness, nor do we need to look to God to discover that which is good.
The same question can be asked of contemporary neuromoralism. If well-being is defined simply in biological terms, by the existence of certain neural states, or by the presence of particular hormones or neurotransmitters, or because of certain evolutionary dispositions, then the notion of well-being is arbitrary. If such a definition is not to be arbitrary, then it can only be because the neural state, or hormonal or neurotransmitter level, or the evolutionary disposition, correlates with a notion of well-being or of the good, which has been arrived at independently.
What could such an independent gauge of goodness be, either in the case of God-defined morality or in the case of brain-defined morality? The answer is the same in both: the existence of humans as autonomous, moral agents. The significance of the Euthyphro dilemma is that it embodies a deeper claim: that concepts such as goodness, happiness and well-being only have meaning with respect to a conscious, rational agent.
But, a neuromoralist might ask, ‘Am I not my brain? In measuring neural states am I not measuring that which makes me a conscious, moral being? And in distinguishing ‘me’ and my brain are you not restoring the ghost to the machine?’
The answer is ‘No’ to all three questions. To see why, let us imagine for the moment that it is possible to quantify precisely the neural state that corresponds to a specific condition of well-being: for instance, being in love or Liverpool FC winning the English Premier League. And let us imagine, too, that we possess the technology to manipulate the brain so as to be able to create the precise neural state that corresponds to that condition of well-being. Would such a neural enhancement create a good morally equivalent to a real-world state in which one actually was in love or Liverpool actually won the Premiership?
Most people would probably think not. That is, most people would think that they would rather actually be in love than simply their brain be in a state that it would be in were they in love. And most Liverpool fans would think it far better for Liverpool really to have won the Premiership than for its supporters to be in a state in which it feels as if Liverpool has (though many may also feel that that is the closest they may come to a Premiership title).
In other words, for most people there is more to me than simply my brain. ‘I’ am embodied not just in my neural activity, nor even just in the activity of my body, but in the relationships with individuals I know, and more broadly with the communities within which I exist, including the broadest community of all – humanity itself. It also suggests that the good is defined not simply by being in a state of well-being but also in the process of getting to that state, and in the transformation of relationships through that process.
Both morality and humanity appear differently to neuromoralist eyes. Consider, for instance, the argument for moral enhancement. The human capacity for morality is ‘limited’, Julian Savulescu suggests, because evolution favoured a tribal, short-sighted sense of morality that is insufficient to deal with the problems of the 21st century, from climate change to terrorism. But space age technology can put right our Stone Age morality: ‘Our sense of fairness and our basic moral dispositions, like our patterns of sexual behavior and our relationships’, Savulescu has suggested, ‘have strong biological contributors capable of being understood and being manipulated or changed’.
A combination of positive eugenics and neurological intervention will, he believes, provide for ‘a better understanding of human moral limitation’ and allow us to ‘inculcate certain values and certain forms of morality’ rather than be ‘neutral as we traditionally have been in liberal societies to different conceptions of the good life, religious traditions and different versions of morality’. Such intervention can enhance good dispositions such as altruism, generosity and compassion, and flush out unacceptable ones such as aggression and xenophobia. Drugs or neurosurgery could help purge racists of their immoral views, and neurotransmitters such as oxytocin could be added to the water supply to improve the general level of social trust.
Even leaving aside the question of the morality of eugenics or of moral neuro-enhancement, Savulescu’s is an argument that raises a boxful of questions and more. Adding fluoride to water is a good because stronger tooth enamel is good in all circumstances. But is it a good that trust be enhanced in all circumstances? After all, would not authoritarian regimes and even democratic politicians welcome a more trustful, and therefore a less questioning, population?
Is aggression always bad? Is the aggression that Iranian protestors showed when they took to the streets of Teheran equivalent to the aggression that the police demonstrated when they beat up those protestors? And if not, does it make any sense to suggest, as Savulescu does, that ‘our futures may depend upon making ourselves wiser and less aggressive’, including through the ‘compulsory’ use of serotonin?
Savulescu never asks such questions, probably because the neuromoralist understanding of what it is to be moral, and what it is to be human, does not allow for such questions. Patricia Churchland, to her credit, takes a more Aristotelian view of morality, as habits to be inculcated and learned. Neuroscience, she suggests, is the best means of discovering the most appropriate habits to inculcate and learn.
Most neuromoralists, however, such as Savulescu and Harris, ironically adopt a much more Biblical view of morality. Moral norms do not emerge through a process of social engagement and collective conversation, nor in the course of self-improvement, but rather are laws to be revealed from on high and imposed upon those below.
Science will tell us which conception of the good life is objectively true, and scientists will inculcate such values into the masses, by tweaking the brain, lacing the water, handing out ethics pills or simply by keeping an eye upon our behaviour.
Sam Harris, for instance, relishes the prospect of governments and corporations utilizing neuro-scanning technology to detect if people are lying, and so enforcing no-lie zones. ‘Thereafter, civilized men and women might share a common presumption’, he writes, ‘that whenever important conversations are held, the truthfulness of all participants will be monitored… Many of us might feel no more deprived to lie during a job interview or at a press conference than we currently feel deprived of the freedom to remove our pants in the supermarket.’
Not for Harris the moral virtues of freedom and liberty. Science has decreed that truthfulness, at least truthfulness to those in power, possesses a moral premium.
The neuromoralists’ moral Utopia reminds one of nothing so much as a modern, high-tech version of Plato’s Republic, that best of societies in which ‘the desires of the inferior many are controlled by the wisdom and desires of the superior few.’ Unlike a democracy, in which every citizen is ‘always surrendering rule over himself to whichever desire comes along’, leading to an anything-goes morality (a fear that one finds in every neuromoralist tract), the rulers of Plato’s Republic are especially wise and rational philosopher kings, in whose Utopia a special breeding programme ensures that only the best marry the best, in which deficient children are culled, and in which all undergo a strict programme of education, indoctrination and discipline. No doubt, had Plato known of oxytocin and neural scanners, they too would have had their place in the Republic.
The neuromoralists’ Utopias are clearly fantasies. There is no prospect, at least in the foreseeable future, of oxytocin being added to the water nor of Nick Griffin being force-fed ‘love thy neighbour’ pills. And yet, in an age in which many people increasingly look to science for answers to social and moral questions, in which fMRI scan results are beginning to be used as evidence in criminal cases and in which, as this conference itself reveals, everyone from governments to advertisers is becoming obsessed with the neurozone, it pays to be attentive to neuromoralist fantasies. What neuromoralists provide are not blueprints for a coming Republic but fleshed-out versions of themes with which our age is already preoccupied.
Behind the fantasy and the bluster, what neuromoralism expresses is a paradox of modern science. The success of science in understanding nature has created problems for its understanding of human nature. The success of science derives from the way that it has ‘disenchanted’ the natural world, to borrow Max Weber’s phrase, stripping the universe of purpose and desire, and rendering it – or at least attempting to render it – as a clockwork universe.
Humans, too, have become part of the natural order, and hence objects that can be understood as any other in the mechanical world. But the very process of viewing humans naturalistically has also created a seeming chasm between humanity and nature.
In the pre-modern world, thought was as much part of the external world as it was of the mind inside. For both Plato and Aristotle, for instance, there is no clearcut distinction between thought and the object of thought. In the post-Cartesian world, however, the subject becomes that which thinks and acts, and the object becomes that upon which thought and action bear.
Science made humanity part of the natural order; but it also established a distinction between a humanity that is a thinking subject and a nature that presents itself to thought but is itself incapable of thought. It presupposed a view of humanity as the active subject that has power over, and control of, the object of its attention, nature.
The very process that makes humans part of the natural order also, then, takes humans outside of nature. This is, in Kate Soper’s words, ‘the paradox of humanity’s simultaneous immanence and transcendence’. Nature ‘is that which Humanity finds itself within, and to which in some sense it belongs, and also that from which it seems excluded in the very moment it reflects upon either its otherness or its belongingness.’
To put it another way, it is difficult to treat humans as disenchanted creatures. Humans possess purpose and agency, self-consciousness and will, qualities that science has largely expunged from the rest of nature, but qualities without which science itself would not be possible. Nor would morality. Only by accepting humans not simply as objects but also as subjects – that is, as moral agents capable of taking responsibility for our actions and who, through history, can develop our moral sensibilities – can morality make sense. Humans are the bridge between facts and values.
Yet it is precisely this aspect of humans as both object and subject that neuromoralism seeks to deny. In The Moral Landscape, Sam Harris makes a ‘scientific’ argument against capital punishment and the morality of retributionist justice, insisting that ‘The urge for retribution…seems to depend upon our not seeing the underlying causes of human behavior’. ‘The men and women on death row have some combination of bad genes, bad parents, bad ideas and bad luck’ – none of which they are responsible for.
There is, however, Harris insists, a scientific argument for incarcerating criminals, even though they are not responsible for their acts, because it helps protect society. The analogy that he uses to make this point is both curious and telling. ‘Clearly’, he writes, ‘we need to build prisons for people who are intent on harming others. But if we could incarcerate earthquakes and hurricanes for their crimes, we would build prisons for them as well.’
In prescientific societies the distinction between humans and earthquakes and hurricanes was often blurred because all nature was imbued with agency. In the world of neuromoralism, the distinction between humans and earthquakes and hurricanes has become blurred because nothing is imbued with agency. A truly scientific vision would recognize that what makes humans different is our existence as both objects and subjects, and that one of the facts from which moral values emerge is precisely that existence as both.