Despite warnings that self-driving cars are not yet safe to hit the roads, the government is pushing ahead with making automated vehicles (AVs) legal. But its rush to get driverless cars approved before the technology is ready is irresponsible – and could turn our roads into a giant testing ground. This would bring risks not only to other drivers, but to those using the AVs themselves.
The government has heralded AVs as the start of a transport revolution and the solution to road safety concerns. Indeed, research has suggested that AVs could create 342,000 additional jobs in the UK, bring in £66 billion to the economy by 2040 and improve the mobility of the disabled and the elderly. Although the potential benefits of AVs should not be dismissed, this is all a fantasy unless their technology is sound.
Last December, Transport Secretary Mark Harper stated that sophisticated AVs could appear on our roads ‘as early as 2026’. A new Automated Vehicles Bill is currently being examined in the House of Lords. But there is a problem: the theoretical benefits of AVs driving the Bill go far beyond the realms of what is currently technologically possible.
The Bill itself undoubtedly provides some useful legal innovations, such as establishing user liability. It plans to make it compulsory for AVs to have an ‘authorised self-driving entity’ – typically the vehicle’s manufacturer – which is legally responsible for the AV whilst it is ‘driving itself’. This would leave only ‘non-driving’ responsibilities with the user. However, the assumption that seemingly underlies the Bill is that AV technology will soon be road ready and simply needs a legal framework to ensure it can operate safely. In reality, the most pressing safety issues lie with the technology itself.
As things stand, instead of making driving more convenient and enjoyable, AVs will create inconvenience and needlessly endanger lives. AVs face multiple problems: their AI systems cannot comprehend the subtleties of driving, they require continual human supervision, and they are woefully underprepared for cyber threats.
AI’s poor decision-making ability has already led to Cruise, a key developer of self-driving cars in California, having its ‘robotaxi’ service banned from operating on public roads after the AVs impeded emergency services – the AI operating the vehicles lacked the judgement of a human driver to let ambulances pass. Another worrying case saw a pedestrian forced into the path of an AV, which proceeded not only to hit the individual but to stop on top of them.
Clearly, AVs remain limited in their ability to read driver and pedestrian body language. Until they can cope with the complexities of driving on public roads as safely as humans, which appears some way off, the government should not rush to roll them out.
Even systems where a human driver can resume control of the car, such as Tesla’s autopilot system, pose unnecessary risks. Tellingly, the company is having to recall two million cars sold in the US due to their role in numerous fatal crashes. The system lulled drivers into a false sense of security, reducing their readiness to take the wheel when something went wrong. But even if drivers were ready to take control at all times, that very requirement undermines the entire benefit of self-driving cars.
AVs also pose additional cyber security risks to road users, as they are heavily reliant on software to function. This, unsurprisingly, makes them an attractive target for malicious actors. As the House of Commons transport select committee warned, ‘a large cyber-terrorist attack targeting…many self-driving vehicles simultaneously could cause mass casualties’.
Despite this, there appears to be no clear strategy to counter the cyber threats to AVs beyond the woolly principles offered by the National Cyber Security Centre. For example, one principle states that security should be ‘owned, governed and promoted at board level’; another that software should be ‘managed throughout its lifetime’.
These vague guidelines will not effectively deter hackers from tricking AVs into misinterpreting their surroundings, taking full control of AVs remotely or exposing sensitive customer data. Other similarly software-dependent, internet-connected technologies, such as smart TVs, have shown that it is challenging not only to identify vulnerabilities in systems, but also to ensure they are ‘patched’ with software updates after an attack. Technology companies often fail to issue software patches and, indeed, according to a survey by security company Tripwire, one in three breaches is caused by unpatched vulnerabilities.
Given the potentially disastrous consequences of an AV being hacked, such as careering into pedestrians, it is imperative that AV technology reaches a level where software updates can be carried out simply and routinely, just as you would with your smartphone. Until cyber threats to AVs can be better mitigated, vehicles with either a high or a low level of automation should not be made legal.
AVs could be ground-breaking technology, with many enticing potential benefits. However, with serious safety concerns left unresolved, letting AVs loose on our roads would be a disaster waiting to happen.