Last week, 116 experts including Tesla’s Elon Musk and DeepMind’s Mustafa Suleyman called for a ban on autonomous weapons, otherwise known as ‘killer robots’. The experts’ aims are laudable, but they are likely to have about as much effect as King Canute ordering the sea to turn back.
Whether we like it or not, autonomous weapons are coming. States such as China, Russia and the US are already developing them, and none will want to cede a potential military advantage to its rivals.
From a technological perspective, banning a single application of Artificial Intelligence will become increasingly difficult, because A.I. is increasingly multi-purpose. Just think of IBM’s flagship program, ‘Watson’. In 2011, it won the US general knowledge gameshow Jeopardy! By 2012, it had been sold to several medical companies to help diagnose patients. Now, essentially the same program is used in a vast range of areas, from law to cooking. The point is that the most advanced A.I. programs can be put to many different uses, which means that cutting any given industry off from A.I. will only get harder as time goes by. A program designed to locate a missing person, for instance, could equally serve as a missile-targeting system for assassinating an enemy.
Though weaponised A.I. has the capacity to do great harm if it is hacked or malfunctions, it also has the potential to prevent much unnecessary suffering. A.I. does not get tired, angry or vengeful in the way that human soldiers do. Robots do not rape, loot or pillage. Wars fought by robots could instead be conducted with impeccable discipline and greatly improved accuracy, and consequently with far fewer unintended casualties.
Rather than fighting against it, we should accept that A.I. will be used in weapons, and look to regulate that use. Ethicists, humanitarians, lawyers and computer scientists ought to come together to agree legally binding codes of conduct on when and how A.I. may be deployed. Most nations already subscribe to the rules of international humanitarian law governing conduct in warfare; the Geneva Conventions are one particularly successful example. The UK Government was partly correct to say in 2015 that we do not need to ban autonomous weapons because “international humanitarian law already provides sufficient regulation for this area”.
However, we should not be complacent: autonomous weapons will need new rules too. At present, armies must ensure that military engagements are “proportionate” to the intended objective, taking into account the number of civilians likely to be killed. This is judged case by case; nowhere will you find a golden ratio setting out the price of a human life in any given situation. If autonomous weapons are to be bound by these laws as well, we will need to find some way to codify our framework for tackling such ethical dilemmas.
You might object that such a calculation is monstrous, and that this responsibility should never be delegated to A.I. But if A.I. is to be used to its full potential, this is exactly the sort of issue it will need to confront, and not just in autonomous weapons. Sooner or later a self-driving car will have to decide whether to harm a pedestrian or its passenger, the famous ‘trolley problem’. A.I. is at its most useful when it can make choices for us, but the bargain is that we cannot avoid the moral questions those choices raise. That is exactly why we should engage with them now, through public and reasoned debate.
The danger of simply calling for a ban is not just that it would be ignored and ineffective, but that we would miss the opportunity to instil common values into the development and use of autonomous weapons while they are still at an early stage. As our belated response to climate change shows, it is far harder to impose rules retrospectively than to shape a technology from the start.
We don’t face an imminent threat of killer robots marching down our streets. But the best way to ensure that remains the case is to come together and agree on regulation and shared standards, rather than to make counterproductive demands for prohibition.
Jacob Turner is a lawyer, author and former lecturer at Oxford University. His latest book, Robot Rules, is published next year