Kit Wilson

Who’s afraid of organoid intelligence?

Scientists have noticed that computers are quite inefficient, and that the human brain is quite efficient. Here comes an ethical nightmare.

For fans of bioethical nightmares, it’s been a real stonker of a month. First, we had the suggestion that we use comatose women’s wombs to house surrogate pregnancies. Now, it appears we might have a snazzy idea for what to do with their brains, too: to turn them into hyper-efficient biological computers.

Lately, you see, techies have been worrying about the natural, physical limits of conventional, silicon-based computing. Recent developments in ‘machine learning’, in particular, have required exponentially greater amounts of energy – and corporations are concerned that further technological progress will soon become environmentally unsustainable. Thankfully, in a paper published this week, a team of American scientists pointed out something rather nifty: that the walnut-shaped, spongy computer in your skull doesn’t appear to be bound by anything like the same limitations – and that it might, therefore, provide us with something of a solution.

The human brain, the paper explains, is slower than machines at performing basic tasks (like mathematical sums), but much, much better at processing complex problems that involve limited, or ambiguous, data. Humans learn, that is, how to make smart decisions quickly, even when we only have small fragments of information to go on, in a way that computers simply can’t. For anything more sophisticated than arithmetic, sponge beats silicon by a mile.

The key, apparently, is ‘efficiency’. The paper gives the example of AlphaGo, the AI system that, back in 2016, famously ‘taught itself’ how to beat human players at Go. At the time, it caused a sensation – but the truth, the scientists write, is that AlphaGo was wildly inefficient. It was trained on the digitised data of 160,000 games, a level of ‘experience’ that would take a human playing five hours a day for 175 years to match. The ‘learning’ process also gobbled up 40 billion joules of energy – roughly the same amount a human adult needs to survive for a whole decade. If all the data centres in the US were powered by brains rather than computers, the scientists write, they could run on something like 0.001 per cent of their current energy use.
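
A rough sanity check, for the sceptical reader – assuming an adult runs on about 2,000 kilocalories a day (a standard dietary figure, not one taken from the paper):

\[
2{,}000~\tfrac{\text{kcal}}{\text{day}} \times 4{,}184~\tfrac{\text{J}}{\text{kcal}} \approx 8.4 \times 10^{6}~\tfrac{\text{J}}{\text{day}},
\qquad
\frac{4 \times 10^{10}~\text{J}}{8.4 \times 10^{6}~\text{J/day}} \approx 4{,}800~\text{days} \approx 13~\text{years}.
\]

Thirteen years rather than ten, then – but as back-of-envelope comparisons go, that’s the right ballpark.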

South Korean professional Go player Lee Se-Dol plays Google’s artificial intelligence programme (Credit: Getty Images)

The assumption, of course, is that all of this must have a purely physical explanation – that what sets our brains apart is some remarkable structural intricacy that we can’t currently reproduce in silicon. Rather than waste time trying to figure out how to do it, we can just skip a step and ‘leverage the extraordinary biological processing power of the brain’, instead.

To be clear, nobody is proposing, yet, that we repurpose actual brains. The idea, at this point, is simply to use stem cells to grow minuscule ‘organoids’ – essentially tiny bits of brain tissue that replicate, at the microscopic level, the unique structural complexity we find in grey matter. String enough of these together, and you end up with a lattice of super-efficient, miniature biological processors – or what the scientists romantically call ‘intelligence-in-a-dish’.

As you can imagine, the proposal has already stirred up what is known, in technical jargon, as an ethical shitstorm. 

The first thing critics seem to have latched on to is the question of whether or not such biocomputers would be in any sense ‘conscious’ – and what moral obligations, therefore, we might have towards them.

I, myself, am pretty doubtful that a clump of organoids would ever yield sentience of any genuine kind. Nobody questions that there’s some kind of link between the brain and consciousness. But to my mind, there are just too many arguments against a purely physical account of consciousness to think that simply copying and pasting chunks of grey matter would – lo and behold! – magic into existence new globules of sentience, too. First, no one has ever managed to explain how you get from the physical facts of neurons firing and chemicals fizzing to full-on, technicolour, first-person experience. Second, nobody knows how trillions of tiny physical processes give rise to our capacity to formulate, and think in terms of, ‘human-scale’ abstract concepts. Third, we have no idea how bundles of ‘if-x-then-y’ algorithms end up yielding the intentionality of human reason – the capacity of the human mind to think about things; to direct itself, seemingly freely, towards certain ideas or goals.

That said, I must admit that, while I’m 99 per cent certain that purely digital AI like, say, ChatGPT will never, ever – no matter how sophisticated it gets – become conscious, when it comes to replicating actual biological matter from the human brain, I do get a tiny bit more… queasy. There are, after all, some not-wholly-reductionistic theories that suggest that our neurons, for instance, might act in peculiar ways at the quantum level to help produce elements of conscious experience – including the speculation, offered by the mathematician Roger Penrose, that ‘microtubules’, tiny structures inside our neurons, ‘lock’ quantum fluctuations in a stable state for long enough for something like coherent subjective experience to appear. Even if that were the case, though, it would seem unlikely to me that it was this alone – multiplied by however many trillions of neurons we have in our brains – that creates consciousness. After all, we encounter subjective experience not as the aggregate of lots of microscopic, discrete chunks of proto-conscious experience, but as a unified whole. As Immanuel Kant pointed out, the human mind can only make sense of reality by already ‘bringing to it’ fully-formed concepts – time, space, numbers, objects. It’s hard to know what 3 per cent of the ‘concept of time’, say, would ‘feel like’ – we either have the full thing ready to go in our conceptual armoury, or we don’t have it at all.

The ethical issues don’t end with the matter of consciousness, of course. It’s probably a lost cause, today, to appeal to any kind of notion of the sacred, given that our metaphysical picture of reality no longer really knows how to handle awkward concepts like souls or spirits or inalienable rights. But I can’t help wondering whether we ought just to pause and ask, for a moment, whether toying with our brains like big blobs of play-dough is really such a good idea – or even whether certain things are simply, from the outset, fundamentally off-limits.

Still, even if you resort only to consequentialist arguments, we’re clearly dancing on a very skinny ridge here, with slippery slopes on both sides. Why wouldn’t this lead, in the long run, to us using actual human brains for purely instrumental purposes? Mightn’t we end up requiring the emerging class of ‘useless’ people in society to perform surrogate calculations with their otherwise-wasted grey matter? Do we really think that individuals, given the opportunity to hook up their own brains to silicon chips, or perhaps even to daisy-chained organoid supercomputers, will use their newfound powers for benevolent purposes? 

Which yields the obvious question: why? Why, of all the things scientists could be doing right now, are we so obsessed with these strange, quasi-dystopian projects?

The paper does, of course, offer a few fluffy, feel-good answers beyond the utilitarian goal of saving energy. This kind of research, it states, might help us figure out more about the brain itself, thus hopefully giving us new ways of helping people with, for instance, dementia. Mm.

It’s hard not to think, though, that all of this is just yet another example of our curious modern-day ideology of unrestrained, restless scientific ‘progress’ – that if something can be done, it must be done; that it would be a tragedy to stop short of finding out, once and for all, the absolute full extent of our powers over the universe, whatever the moral cost. This, it seems to me, has become a kind of surrogate source of meaning in the absence of religion. It’s a sentiment expressed by the (admittedly morally disgraced) transhumanist Hugo de Garis: ‘I think it would be a cosmic tragedy if humanity freezes evolution at the puny human level… The prospect of building godlike creatures fills me with a sense of religious awe that goes to the very depth of my soul and motivates me powerfully to continue, despite the possible horrible negative consequences.’

There’s one thing, though, that makes this particular gamble so odd: creating organoid intelligence (OI) will lead, no doubt, to all sorts of weird technological developments, including a great many unintended and unpleasant ones – but it’s already fairly obvious, from the outset, that the one thing the project won’t do is achieve its actual, stated aims.

The reason the brain outperforms computers at complex tasks is simple: its unique ability to make informed, smart, intentional decisions at the level of conceptual thought. We humans are capable, that is, of ‘abductive reasoning’ – making intuitive, creative leaps based on prior knowledge; actually toying with a hypothesis consciously in our minds. This, as artificial intelligence researchers freely admit, is fundamentally different from the kinds of ‘reasoning’ we’ve so far managed to program in algorithmic form, and reproducing abductive reasoning has long been considered a kind of technological holy grail. Evidently, the OI scientists believe the answer must come down to the physical properties of grey matter itself – that something about the way our brains are wired allows for patterns of thought we currently don’t know how to formulate algorithmically.

This seems to me wildly naive. As the computer scientist and writer Erik J. Larson has argued, the reason we still, after decades of research, don’t have ‘the slightest clue’ how you would produce abductive reasoning in a machine is that it simply appears to be something fundamentally non-computational. To me, it seems obvious that the reason the brain is so good at making quick decisions isn’t its physical structure, but the fact that there’s a conscious, reasoning, intentional, unified self – closely associated with the brain – that can actually think about concepts as things in and of themselves, not just compute gigantic lists of mathematical puzzles.

That’s the kind of thing ‘intelligence-in-a-dish’ could never tell you – which makes you wonder whether it’s all really worth the moral risk. Still, I guess it’ll help with the energy bills. 
