Artificial Intelligence (AI) has surged in popularity in recent months. ChatGPT alone has swelled to more than 100 million users in a matter of weeks, capturing the imagination of a world that had previously consigned the technology to the realm of science fiction. Scores of companies, from software businesses to manufacturers, are racing to find fresh ways to build AI into their operations.
But amidst the excitement, there is also a worry: are we going too far, too fast? Twitter’s owner Elon Musk warned this week that AI could lead to ‘civilisation destruction’. Regulators, alarmed at this explosion in activity, are scrambling to react. They face a serious dilemma: do they push for lax rules that give the nascent AI sector enough breathing space to grow, or do they opt for tough legislation that stops bad AI from getting out of hand?
The EU is hoping to be first out of the gate with its proposed rules, and is seeking to strike a balance between these two poles by differentiating between what it calls ‘limited-risk’ and ‘high-risk’ AI and applying different strictures accordingly. But this differentiation could prove a fruitless, perhaps even dangerous, project.
Academics at the University of Cambridge have voiced concern about the proposed EU rules. They warn that general-purpose AI tools, such as ChatGPT, which are likely to be classed as low-risk, could become high-risk if they are used for ulterior purposes. Stanford fellow Dr Lance Eliot warns of the danger of AI that ‘manipulates the targeted AI that has direct contact with the human…that trusts the AI that they normally deal with’. In other words, one AI could be deployed to frustrate the intended operation of a chatbot through calculated interactions, with damaging consequences for its human users.
But there is an even bigger problem: an apparently low-risk AI being given an anodyne task that proves catastrophic. Consider this thought experiment: suppose a leading fashion brand is testing out its new AI, and instructs it to come up with a marketing strategy to maximise sales of its latest designer coat. The AI decides a PR stunt is the most effective approach. It sets about scouring social media sites for data to build up a list of 1,000 people it thinks are most likely to hate each other — sworn enemies, members of rival gangs, ex-criminals eyeing a vendetta, members of ideologically opposed protest groups and so on — and invites them to an exclusive launch event on Oxford Street with the promise of claiming a free luxury coat worth hundreds of pounds.
The invitees duly show up to collect their coats, but as they gather on the street, they catch sight of their foes. Soon enough, a full-scale brawl ensues with fisticuffs and black eyes. Shocked passers-by rush to take pictures of the fracas. Before long, a photo — of a crowd of hundreds of identically-dressed people seemingly at war with each other — makes it to the front pages of all the newspapers.
The AI’s plan pays off and the coats sell at record speed, but the brand’s owners are horrified by the sinister marketing strategy it has concocted. They had assumed, as the regulators had, that this was a low-risk AI being set a low-risk task.
But there could also be subtler cases where things go wrong. Cambridge researchers warn digital personal assistants could be used to ‘promote…ideologies well above others, with the potential to contribute to substantial and potentially harmful shifts in our markets, democracies, and information ecosystems.’
In other words, authoritarian forces might use a large language model like ChatGPT to subtly promulgate an extremist political ideology, in ways that would, on the surface, go undetected.
All of these rather alarming cases point to the need for AI regulation. But there could be as much risk in rules that don’t work as in having no rules at all. And the EU’s track record on effective rule-making is hardly stellar.
In a speech celebrating the adoption of the landmark General Data Protection Regulation (GDPR) in 2016, the EU commissioner for justice, consumers and gender equality, Věra Jourová, hailed the rules as ‘a big step forward in the digital age [that] will help restore trust in the internet.’
‘Without doubt, with better protection and greater control of their personal data, individuals will feel less afraid to use online services,’ she said. ‘They will be better informed and will understand more about how their data is processed.’
Seven years on, few would agree that the EU’s flagship data privacy regulation has achieved these lofty ambitions. Many trust the internet even less than they did before, and GDPR itself has become a byword for the relentless website cookie pop-ups that drive web users round the bend. Doubtless GDPR made vital advances in ensuring private user data would not be abused or hoarded by big corporations, but enforcing the rules has proven cumbersome and bureaucratic.
Regulators in Luxembourg and Ireland have borne the brunt of the legwork, forced to handle the entire bloc’s data complaints concerning tech giants such as Facebook and Amazon because those companies chose to base their European operations there for tax purposes.
The Irish data protection agency has been lumbered with over 1,000 cross-border complaints since 2018. By September last year, it had racked up a backlog of several hundred cases, some of which remain unresolved after almost five years of processing. More than a quarter of the cases raised in 2020 have yet to be concluded, and only about half of those raised in 2021 have reached a conclusion. An enormous 1.6 billion euros (£1.4 billion) in fines have been issued under the rules, but two-thirds of this sum relates to just two cases involving Amazon and WhatsApp. Both companies are now appealing the decisions.
If the EU becomes the first major market to enact AI rules, they will likely set a precedent, with variants of the legislation being adopted worldwide. The bloc needs to consider the rules carefully to avoid another GDPR-style fallout. The stakes could not be higher.