On Tuesday, the government’s consultation on AI and copyright comes to an end. There doesn’t seem to be much hope that Sir Keir and his tech-dazzled colleagues will pay much attention to it, though: long before its close, they made clear that their preferred plan was to change copyright law so that big tech can train its models for private profit on the copyright work of artists, writers and musicians without permission or compensation.
Sir Kazuo Ishiguro and Jeanette Winterson are the latest to raise their voices in opposition – joining the Society of Authors in a united chorus against this reverse Robin Hood notion. As Sir Kazuo rightly says, nobody believes for a second that an ‘opt-out’ model – whereby your work can be stolen unless you specifically ask otherwise – will work. I feel pretty confident they will be ignored, as will all the people whose copyright work is at stake in this argument.
But I don’t want to rehearse the iniquity of suspending copyright so that huge tech companies can make even more money. I wrote on the subject here not long ago. I want to make the larger point that it is of a piece with the drift of most governmental thinking worldwide about AI. Everyone’s desperate for growth, everyone’s imagining that AI will magically cut wage bills and automate processes that used to be slow and expensive, and everyone’s terrified of being left behind, so we are seeing a Dutch auction of principles and standards in the credulous scramble for the imagined riches to be had from being a ‘world-leader’ in AI.
In all cases, the assumption is that any form of regulation – the application of any existing rule or law – that might impede the tech companies’ gallop ahead is going to be poison to the national interest. And a very well funded and very well connected lobby for the people who stand to get very rich indeed from AI is encouraging this view in the corridors of power. JD Vance’s speech at the AI summit in Paris was only the most naked expression of it: ‘we need international regulatory regimes that foster the creation of AI technology,’ he said, ‘rather than strangle it’. Which, when you put it that way, with a giant thumb on the rhetorical scales (it is nicer to foster babies than to strangle them, most people will think), sounds reasonable.
It is anything but. AI undoubtedly has a host of astonishing benefits to offer humanity. It can spot patterns in huge datasets that humans cannot; it has extraordinary potential in understanding protein-folding, in predicting the weather, in medical diagnosis and in epidemiology, in cosmology and resource allocation, and in shudder-making potential military applications. There’s a ton it can do that we should all welcome.
But it also – and these are already-existing use-cases – does great harm and as it advances has the potential to do a great deal more. We know that it is already being used for political manipulation and the dissemination of fake news; that it is creating a crisis of cheating in the education system; that it is a powerful tool in the hands of scammers and fraudsters; that its abilities as a tool of state surveillance are terrifying; that AI technologies are behind ‘deepfake’ pornography used to harass and humiliate women. Its ability to write code or to give a basement-dwelling school dropout the equivalent of an advanced degree in chemistry or virology has all sorts of implications for terrorism that most of us would rather not think about.
That is to say nothing of the more apocalyptic ‘control problem’, which is taken surprisingly seriously by most of the veteran experts on AI who have spoken on the subject. If AI’s development is driven only by a winner-takes-all race for private profit, there is no guarantee that, in their haste to be first, the big tech companies won’t accidentally create something whose incentives or interests don’t align with those of humanity. It sounds alarmist when, to take a famous example, you speculate that an AI might turn us all into paperclips. But those who know the field take versions of that question very seriously indeed.
To cast the idea of regulating this sector of inquiry as pettifogging red tape, as short-sighted Lilliputians binding Gulliver, therefore, is just dangerously and stupidly irresponsible. We regulate anything with the potential to do serious harm. That’s why we have strict rules on gain-of-function research in virology labs; why we tend to keep a close eye on who has access to the precursor chemicals for recreational drugs or explosives; why you can’t buy an AK-47 in Tesco; why, indeed, the same government that wants to let AI companies do whatever they please is at the same time busy trying to limit the availability of unbreakable encryption.
A technology as powerful as this – and one that has the potential to break things irreparably – is exactly the sort of technology that demands close regulation; regulation that, sure, keeps the playing field level for competition (which calls for international cooperation), but regulation no less. The idea that the profit motive alone will keep us safe is for the birds. Hope for the best, sure: but regulate for the worst.
Writing in these pages not long ago, Simon Hunt recalled that in the early years of DeepMind its two founders differed on what they wanted to do with it. Mustafa Suleyman spoke of using the technology to solve humanity’s problems. Demis Hassabis speculated that DeepMind would be able to answer questions about ‘humanity’s purpose, its destiny and the fate of planet earth’.
That is, obviously, the purest and most ridiculous hooey, but it is an indication of the near-theological attachment that its boosters have to the technology. Money, and blind faith in the transformative potential of a technology we barely understand, are very powerful drivers of behaviour. And that makes me think that unless our leaders are capable of giving their heads a wobble, the profound lesson that AI will give us about humanity is the same one that was given to us by Prometheus, investors in the South Sea Bubble, early modern speculators in tulips, and US mortgage lenders in 2008: which is that herd behaviour, blind faith and foolish greed don’t tend to end well for anybody.