Eyebrows have been raised, to put it mildly, at the news that the Ministry of Defence is using an artificial intelligence programme to assess submissions to the current review of Britain’s armed forces. The Strategic Defence Review was launched in July, and the following month a call for evidence was issued, inviting ‘serving military, veterans, MPs of all parties, industry, and academia’ to submit responses to a series of propositions through an online portal. The closing date was 30 September, and the responses would ‘help’ the review team led by former Nato secretary general Lord Robertson of Port Ellen.
Defence is a policy area rich in verbiage and jargon, and both military and civilian experts are adept at shrouding even simple meaning in the most impenetrable of language
It transpires that the government agreed a contract over the summer with US data analytics software company Palantir Technologies to design a programme which will analyse the thousands of submissions received, looking for key words and themes. It will then produce a summary of the data to inform the review secretariat as it begins to draft the report.
Unease at this innovation has been expressed on two principal counts: content and security. Each is important, and each requires at least reassurance and perhaps mitigation from the government, although it would seem unlikely that the process will be altered significantly at this stage. The review is expected to be drafted and finalised over the next few months and delivered to the Ministry of Defence around March next year. It is anticipated that the government will then respond in the summer.
In terms of content, there is anxiety about the level of sophistication the software will bring to its analytical work. Defence is a policy area rich in verbiage and jargon, and both military and civilian experts are adept at shrouding even simple meaning in the most impenetrable of language. How has the software been developed and tested? How rigorously has it been screened for hidden bias? How deeply will it be able to interrogate what is really being said? One industry figure fretted that it would produce a ‘glorified word cloud’, while others have suggested that some contributors have attempted to game the system by emphasising words and phrases which they think will attract the attention of the AI filters.
The government’s response to this criticism is that civil servants will exercise oversight of the AI process, but this is scant reassurance. If the task is so huge that it must be performed by software, then a handful of officials will hardly be able to provide significant safeguards. Alternatively, if the oversight process is sufficiently thorough and detailed to guard against any major problems, why bother automating the analysis at all?
The second area of concern is security. Anyone who has dealt with the Ministry of Defence will know that it is a department unusually obsessed with secrecy and protection of information, and the motivation is understandable. It deals with some of the British state’s most sensitive issues like the Trident nuclear deterrent, and is always a prime target for penetration and espionage by our adversaries.
It is reasonable to ask, therefore, how secure the AI process is and what measures have been taken to protect the enormous amount of data it must be collecting and collating. AI systems are relatively fragile and easily hacked, and it is a sufficiently common occurrence that Google created its own dedicated ‘red team’ for anticipating, deterring and resisting hackers. Has the MoD effectively created an enormous Trojan horse to allow sophisticated actors access to an invaluable cache of classified and confidential information?
It is important not to throw up our hands in despair at the use of new and emerging technology. Even the most hardened cynic will grudgingly allow that the potential benefits of artificial intelligence and large language models are vast and could be transformative for the pace and scale of policy-making. The Ministry of Defence has not asked the Microsoft Paperclip to help it write a defence review, and the department should be at the forefront of technological change.
But if the potential benefits are vast, so too are the potential pitfalls. The Strategic Defence Review is intended to ‘determine the roles, capabilities and reforms required by UK Defence to meet the challenges, threats and opportunities of the twenty-first century.’ It should be a once-in-a-generation assessment and reorientation of our defence and security posture, looking decades into the future. The government likes to be seen as making rational and evidence-based policy, which would be fatally undermined if that evidence base was itself flawed.
The Ministry of Defence has much more to do in reassuring interested parties that its AI process is secure and reliable. There will come a day, probably soon, when this kind of approach is standard. But at the moment it is fair to ask if this was the right project to conduct this kind of experiment on. Simply put, the MoD cannot afford to get this wrong.
Comments