Brain drain

Neuroscience wants to be the answer to everything. It isn’t

17 March 2012

6:00 PM


There are many reasons for believing the brain is the seat of consciousness. Damage to the brain disrupts our mental processes; specific parts of the brain seem connected to specific mental capacities; and the nervous system, to which we owe movement, perception, sensation and bodily awareness, is a tangled mass of pathways, all of which end in the brain. This much was obvious to Hippocrates. Even Descartes, who believed in a radical divide between soul and body, acknowledged the special role of the brain in tying them together.

The discovery of brain imaging techniques has given rise to the belief that we can look at people’s thoughts and feelings, and see how ‘information’ is ‘processed’ in the head. The brain is seen as a computer, ‘hardwired’ by evolution to deal with the long vanished problems of our hunter-gatherer ancestors, and operating in ways that are more transparent to the person with the scanner than to the person being scanned. Our own way of understanding ourselves must therefore be replaced by neuroscience, which rejects the whole enterprise of a specifically ‘humane’ understanding of the human condition.

In 1986 Patricia Churchland published Neurophilosophy, arguing that the questions that had been discussed to no effect by philosophers over many centuries would be solved once they were rephrased as questions of neuroscience. This was the first major outbreak of a new academic disease, which one might call ‘neuroenvy’. If philosophy could be replaced by neuroscience, why not the rest of the humanities, which had been wallowing in a methodless swamp for far too long? Old disciplines that relied on critical judgment and cultural immersion could be given a scientific gloss when rebranded as ‘neuroethics’, ‘neuroaesthetics’, ‘neuromusicology’, ‘neurotheology’, or ‘neuroarthistory’ (subject of a book by John Onians). Michael Gazzaniga’s influential 2005 study, The Ethical Brain, has given rise to ‘Law and Neuroscience’ as an academic discipline, combining legal reasoning and brain imaging, largely to the detriment of our old ideas of responsibility. One by one, real but non-scientific disciplines are being rebranded as infant sciences, even though the only science involved has as yet little or nothing to say about them.

It seems to me that aesthetics, criticism, musicology and law are real disciplines, but not sciences. They are not concerned with explaining some aspect of the human condition but with understanding it, according to its own internal procedures. Rebrand them as branches of neuroscience and you don’t necessarily increase knowledge: in fact you might lose it. Brain imaging won’t help you to analyse Bach’s Art of Fugue or to interpret King Lear any more than it will unravel the concept of legal responsibility or deliver a proof of Goldbach’s conjecture; it won’t help you to understand the concept of God or to evaluate the proofs for His existence, nor will it show you why justice is a virtue and cowardice a vice. And it cannot fail to encourage the superstition which says that I am not a whole human being with mental and physical powers, but merely a brain in a box.

The new sciences in fact have a tendency to divide neatly into two parts. On the one hand there is an analysis of some feature of our mental or social life and an attempt to show its importance and the principles of its organisation. On the other hand, there is a set of brain scans. Every now and then there is a cry of ‘Eureka!’ — for example when Joshua Greene showed that dilemmas involving personal confrontation arouse different brain areas from those aroused by detached moral calculations. But since Greene gave no coherent description of the question, to which the datum was supposed to suggest an answer, the cry dwindled into silence. The example typifies the results of neuroenvy, which consist of a vast collection of answers, with no memory of the questions. And the answers are encased in neurononsense of the following kind:


‘The brains of social animals are wired to feel pleasure in the exercise of social dispositions such as grooming and co-operation, and to feel pain when shunned, scolded, or excluded. Neurochemicals such as vasopressin and oxytocin mediate pair-bonding, parent-offspring bonding, and probably also bonding to kith and kin…’ (Patricia Churchland).

As though we didn’t know already that people feel pleasure in grooming and co-operating, and as though it adds anything to say that their brains are ‘wired’ to this effect, or that ‘neurochemicals’ might possibly be involved in producing it. This is pseudoscience of the first order, and owes what scant plausibility it possesses to the fact that it simply repeats the matter that it fails to explain. It perfectly illustrates the prevailing academic disorder, which is the loss of questions.

Traditional attempts to understand consciousness were bedevilled by the ‘homunculus fallacy’, according to which consciousness is the work of the soul, the mind, the self, the inner entity that thinks and sees and feels and which is the real me inside. We cast no light on the consciousness of a human being simply by redescribing it as the consciousness of some inner homunculus. On the contrary, by placing that homunculus in some private, inaccessible and possibly immaterial realm, we merely compound the mystery.

As Max Bennett and Peter Hacker have argued (Philosophical Foundations of Neuroscience, 2003), this homunculus fallacy keeps coming back in another form. The homunculus is no longer a soul, but a brain, which ‘processes information’, ‘maps the world’, ‘constructs a picture’ of reality, and so on — all expressions that we understand, only because they describe conscious processes with which we are familiar. To describe the resulting ‘science’ as an explanation of consciousness, when it merely reads back into the explanation the feature that needs to be explained, is not just unjustified — it is profoundly misleading, in creating the impression that consciousness is a feature of the brain, and not of the person.

Perhaps no instance of neurononsense has been more influential than Benjamin Libet’s ingenious experiments which allegedly ‘prove’ that actions which we experience as voluntary are in fact ‘initiated’ by brain events occurring a short while before we have the ‘feeling’ of deciding on them. The brain ‘decides’ to do x, and the conscious mind records this decision some time later. Libet’s experiments have produced reams of neurobabble. But the conclusion depends on forgetting what the question might have been. It looks significant only if we assume that an event in a brain is identical with a decision of a person, that an action is voluntary if and only if preceded by a mental episode of the right kind, that intentions and volitions are ‘felt’ episodes of a subject which can be precisely dated. All such assumptions are incoherent, for reasons that philosophers have made abundantly clear.

So just what can be proved about people by the close observation of their brains? We can be conceptualised in two ways: as organisms and as objects of personal interaction. The first way employs the concept ‘human being’, and derives our behaviour from a biological science of man. The second way employs the concept ‘person’, which is not the concept of a natural kind, but of an entity that relates to others in a familiar but complex way that we know intuitively but find hard to describe. Through the concept of the person, and the associated notions of freedom, responsibility, reason for action, right, duty, justice and guilt, we gain the description under which human beings are seen, by those who respond to them as they truly are. When we endeavour to understand persons through the half-formed theories of neuroscience we are tempted to pass over their distinctive features in silence, or else to attribute them to some brain-shaped homunculus inside. For we understand people by facing them, by arguing with them, by understanding their reasons, aspirations and plans. All of that involves another language, and another conceptual scheme, from those deployed in the biological sciences. We do not understand brains by facing them, for they have no face.

We should recognise that not all coherent questions about human nature and conduct are scientific questions, concerning the laws governing cause and effect. Most of our questions about persons and their doings are about interpretation: what did he mean by that? What did her words imply? What is signified by the hand of Michelangelo’s David? Those are real questions, which invite disciplined answers. And there are disciplines that attempt to answer them. The law is one such. It involves making reasoned attributions of liability and responsibility, using methods that are not reducible to any explanatory science, and not replaceable by neuroscience, however many advances that science might make. The invention of ‘neurolaw’ is, it seems to me, profoundly dangerous, since it cannot fail to abolish freedom and accountability — not because those things don’t exist, but because they will never crop up in a brain scan.

Suppose a computer is programmed to ‘read’, as we say, a digitally encoded input, which it translates into pixels, causing it to display the picture of a woman on its screen. In order to describe this process we do not need to refer to the woman in the picture. The entire process can be completely described in terms of the hardware that translates digital data into pixels, and the software, or algorithm, which contains the instructions for doing this. There is neither the need nor the right, in this case, to use concepts like those of seeing, thinking, observing, in describing what the computer is doing; nor do we have either the need or the right to describe the thing observed in the picture, as playing any causal role, or any role at all, in the operation of the computer. Of course, we see the woman in the picture. And to us the picture contains information of quite another kind from that encoded in the digitalised instructions for producing it. It conveys information about a woman and how she looks. To describe this kind of information is impossible without describing the content of certain thoughts — thoughts that arise in people when they look at each other face to face.
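The point can be made concrete with a minimal sketch in Python (the function name and layout are invented for illustration, not drawn from any real display system). Notice that nothing in the code refers to what the picture depicts: the whole process is describable in terms of bytes, dimensions and pixel values alone.

```python
def decode_to_pixels(data: bytes, width: int, height: int) -> list[list[int]]:
    """Translate a flat stream of encoded bytes into rows of greyscale
    pixel values. No concept of 'seeing' or of the pictured subject is
    needed to describe what this function does."""
    if len(data) != width * height:
        raise ValueError("encoded input does not match the stated dimensions")
    # Slice the byte stream into one row of pixels per scan line.
    return [list(data[row * width:(row + 1) * width]) for row in range(height)]

# A 3x2 "image": six encoded bytes become six pixels.
frame = decode_to_pixels(bytes([0, 128, 255, 64, 32, 16]), width=3, height=2)
print(frame)  # [[0, 128, 255], [64, 32, 16]]
```

Whatever the screen ends up showing, the woman in the picture plays no causal role in this description; only the data and the algorithm do.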

But how do we move from the one concept of information to the other? How do we explain the emergence of thoughts about something from processes that reside in the transformation of visually encoded data? Cognitive science doesn’t tell us. And computer models of the brain won’t tell us either. They might show how images get encoded in digitalised format and transmitted in that format by neural pathways to the centre where they are ‘interpreted’. But that centre does not in fact interpret – interpreting is a process that we do, in seeing what is there before us. When it comes to the subtle features of the human condition, to the byways of culpability and the secrets of happiness and grief, we need guidance and study if we are to interpret things correctly. That is what the humanities provide, and that is why, when scholars who purport to practise them add the prefix ‘neuro’ to their studies, we should expect their researches to be nonsense.

Roger Scruton’s The Face of God is out this week from Bloomsbury/Continuum.


  • Ted Schrey Montreal

    This reminds me of the age-old faux problem of the nature-nurture debate, which died of old age, praisegod. I never quite grasped that either. We are physical but overwhelmingly ‘symbolical’ at the same time. I did read Libet’s research and agree with the present writer it is based on mistaken assumptions.

  • Shalom Freedman

    This is an article I have been waiting to read for a long time. It confirms intuitions I have had about the exaggerated neurosciencing of everything. I have particularly paid attention to the total absurdity of neuroscientific efforts at literary criticism.
    Scruton is simply one of the most sane insightful and persuasive thinkers working today.

  • kevin

    This is a brilliant essay, and provides a forceful and long-overdue critique of the rubbish coming from certain parts of academia. But it was only after reading that I noticed the author’s name. How depressing. Can’t we find someone else to say these things, someone who isn’t a reactionary polemicist-for-hire twit? Please?

  • blindboy

    I was talking to my junior science class about alchemy today, and reading this I suspect the alchemists, feeling just as threatened by progress as the current generation of philosophers, probably constructed equally laborious denials of the increasingly obvious.
    The knowledge that particular neurochemicals are involved in forming bonds with other people seems quite significant to me. There are numerous disorders in which individuals have difficulty forming relationships. It is quite possible that these very real disorders could become treatable by what is dismissed here as pseudoscience.
    I am not aware of any serious scientist, neuro or other, who claims to fully explain consciousness but we are making progress. Consider the body map. Surely any full explanation of consciousness involves an explanation of our awareness of our body and what separates self from other. Consider then that if the region of the brain that maps the body is damaged we can lose the perception that the body part corresponding to that part belongs to our self.
    There is a history in medicine of apparently sane and otherwise normal people asking for limbs to be amputated because they are not theirs. Without neuroscience to reveal that this condition arises from a physical problem they would be considered mentally ill.

  • Peter Clarke

    I enjoyed this article and agree with its main arguments. However, as a neuroscientist myself, I would like to mention that most of us do not support the neurohype and extreme eliminative reductionism that Roger Scruton rightly attacks.

    As for Kevin’s question, “Can’t we find someone else to say these things?”, Scruton’s article already mentions Max Bennett and Peter Hacker. Other authors who address the brain-mind interface with cautious wisdom include Malcolm Jeeves, Raymond Tallis, David J. Linden and Warren S. Brown.

  • anthony steyning

    Mr Scruton can do a lot better. If neuroscience is babble then this article is blah, blah, blah! It tells us nothing, except that he is a very, very conventional chap who never forgets to tip his hat as he passes a church, even if he won’t rush in. On the subject of neuroscience I’d rather listen to Eric Kandel, and yes it’s all about mechanics but it helps along deep appreciation of our brain for knowing how to fly. Of course my pet zebra would have a ‘soul’, too, if he were a little smarter, eventually taking responsibility for its acid literary critiques… Next time do something really wild, Sir, by looking forward a tad. This takes courage and those neurons haven’t been identified yet either. Probably because they’re glued to the back of the ‘soul’, right?

  • Kenneth Nowell

    I would like to support Peter Clarke’s comment. Let’s not essentialize any discipline as “standing for” some fixed opinion. Think of all the poor working scientists who are NOT blowhard buffoons like Dawkins! None of us want the high profile fruitcakes out there defining our own serious professions.

  • Jim Barrass

    Seems to me that what we find in neuroscience is in essence no less than literature or psychology, in that they all provide maps of varying resolution of what we call consciousness; but maps are what they remain. To my mind, literature is the best map yet.

  • Arnold Trehub

    Roger Scruton loses touch with serious neuroscience when he suggests that it demands that “Our own way of understanding ourselves must therefore be replaced by neuroscience, which rejects the whole enterprise of a specifically ‘humane’ understanding of the human condition.”

    As a cognitive neuroscientist, my goal — and, I think, the goal of most investigators in this enterprise — is to explain how biology makes a “humane” understanding of the human condition possible. The very pursuit of this understanding is part of the wonder of the human condition.

  • Tom Hartley

    “They might show how images get encoded in digitalised format and transmitted in that format by neural pathways to the centre where they are ‘interpreted’. But that centre does not in fact interpret – interpreting is a process that we do, in seeing what is there before us.”

    What are the “we” and “us” in the last sentence but an inaccessible homunculus? Cognitive neuroscientists are in fact trying to understand, for example, how images are interpreted WITHOUT a homunculus, relying only on brain mechanisms and behaviour which we can measure; our ideas and computer models, however flawed, are evidence-based and testable, unlike many of the assertions in the article.

  • Johnny G Ray

    Peter Clarke: I don’t dispute your assertion that most neuroscientists don’t support the irrational reductionism that Scruton criticizes (I don’t have any direct knowledge to base an opinion on) but almost all of the articles I read on the internet by neuroscientists do take this position. (There are exceptions). It could be marketing, or it could be that the sites that publish articles for non-specialists like to print the most provocative positions.
    The responses criticizing Scruton’s intelligent critique because he is a Conservative are simply childish. I have repeatedly found Roger Scruton to be one of those rare thinkers (like the late philosopher Robert Nozick) who cuts right to the heart of philosophical questions, and delivers a well-thought and insightful analysis of any subject he touches, frequently raising fundamental questions that experts in the field haven’t thought to address. Dismissing rational points about any issue because of the author’s non-relevant political positions on other subjects is the mark of an ideologue.

  • Tedd

    Perhaps I read the article in too generous a frame of mind. (It was an interesting counterpoint to Jeff Hawkins’ “On Intelligence,” which I’m currently reading.) But I didn’t get the impression Scruton was slagging neuroscience as a field, rather simply defining its boundaries (in his view).

    Kevin’s comment made me laugh out loud, though. Could there be a better example of unwillingness to re-examine one’s assumptions?

  • pkbrando

    Apparently Scruton minored in straw horses.

  • Kai Maristed

    Wow. This needed to be said and will go far. Just read (in Istanbul during a conference of young entrepreneurs) in between bouts of Kahneman’s absorbing new book. Thank you for reclaiming true frontiers.

  • Peter Penguin

    An extensive and trenchant elaboration of the arguments given by Scruton may be consulted here:

  • James Smith

    This article is incoherent and un-credible. While there are many levels on which we can look at, for instance, human decision-making and therefore personal responsibility, such processes are clearly carried out in the brain, according to neural/physical/chemical/electrical processes. While these processes are complex, they are a result of the physics of the brain, unless one believes in such myths as a religious soul or our time-honored homunculus.

    Such a realization does NOT diminish us as human beings. It gives us the possibility of deepening our understanding of how being human works. Scruton is babbling here. Incoherently.

  • Peter Clarke

    James Smith, I think you have misunderstood Scruton’s article. He accepts, like you (and me) that, to quote your words: “human decision-making and therefore personal responsibility, such processes are clearly carried out in the brain, according to neural/physical/chemical/electrical processes.” and he makes his acceptance of this clear in his first paragraph. He is not attacking these fundamental claims of neuroscience, but the derivative claims of a few serious philosophers (notably Patricia Churchland) and other less serious neurohypers who would have us replace aesthetics, criticism, musicology and law by neuro-aesthetics, neuro-criticism, neuro-musicology and neuro-law.

  • Bjorn Merker

    As another neuroscientist, I second Peter Clarke’s judicious comments.
    Personally, I am embarrassed to hear some colleagues seriously argue that the fact that there is a seamless causal neural path to a given behavior absolves you of moral responsibility for it. Such a stance betrays an incredibly shallow understanding of what moral responsibility is all about, and is a newfangled version of the old excuse “the devil made me do it”. In brief, it is the fact that that causal path ran its course in YOUR particular brain, with your particular personal history, that makes you the AUTHOR of that behavior, and thus morally responsible for it.

  • Myron Muller

    Yes, I am tired of people ruining the beauty and mystery of the stars by trying to tell me some hokum about supermassive, distant bodies of incandescent plasma. These “scientists” should be ashamed.

  • Egypt Steve

    The question of whether a fully-material brain which does what it does on the basis of physical laws is capable of “moral responsibility” is a red herring.

    What matters is this: if society treats individuals *as if* they are morally responsible agents, punishing or shaming them for doing what society holds to be “wrong,” or rewarding them for doing what society holds to be “right,” it creates a network of stimuli to which (many) brains respond. This would be necessary even if “morality” could be proven to be a social delusion and/or if the rhetoric of “morality” was abandoned.

  • Mark Kennedy

    Neuroscientists are hardly the first enthusiasts to advance hyperbolic claims for ground-breaking discoveries, and limits sufficient to take the edge off their enthusiasm will crop up soon enough. As Mr. Clarke has pointed out, Raymond Tallis and others have already alerted us to the equivocations to which an uncritical use of terms like ‘seeing,’ ‘thinking,’ ‘observing,’ and ‘information’ inevitably leads. Mr. Scruton’s article would have been more interesting if, instead of reprising the same objections to neuroscience’s philosophical pretensions that will have occurred to intelligent readers anyway, he’d made some attempt to show how the challenging findings of neuroscience can be incorporated into the vocabulary of consciousness. What are thoughts and feelings, precisely, and what is their relation to brain activity? That the latter can’t simply be masqueraded as the former is no more newsworthy than the ‘insight’ that brains aren’t computers (even Jerry Fodor doesn’t think this). So… now, what?

    The article cites the ‘process of interpreting’ and the act of analysing Bach’s Art of Fugue; it mentions King Lear and Michelangelo’s David. Everything on the list accords satisfyingly (if predictably) with our intuition and experience. But if we’re unable to investigate this interpretive process, or show how such analysis gets performed, doesn’t this suggest that the vocabulary of intentions, decisions and aesthetic judgments may be oscillating between meanings just as equivocally as that of the neuroscientists? After all, we understand what ‘process’ means when it comes to describing chemical reactions. What can the same word possibly mean, even analogically, when it purports to represent “arguing” with people and “understanding their reasons, aspirations and plans?”

  • Ernest Jones

    The whole is, at this point at least, greater than the sum of its parts, and this debate is about finding out why. I find neuroscience fascinating and enlightening, but I can enjoy my music, art and literature without wondering why as I enjoy them.

  • SocraticGadfly

    The free will, or volitionism, or whatever term you want, vs. determinism debate is archaic, and, not because of neuroscience. To riff on Dan Dennett, though he refused to draw this logical conclusion, and on Daniel Wegner, who did: “If there’s no Cartesian meaner, there’s no Cartesian free willer.”

  • MJA

    Another science idea: Perhaps science could use their super collider at CERN to slam two brains together and find the single micro particle of truth that unites us all, the foundation of the infinite Universe, the absolute so many are still searching for. Surely truth exists. But rather than calling it the God particle if found, I would call it just is. =