We have watched too many movies. To our cinema-inflamed imaginations the robots of the future are silken-voiced and pliant like Scarlett Johansson in Her, or wisecracking humanoids like Star Wars’ C-3PO, or the murderous Terminators of so many Arnold Schwarzenegger films. What they are not, neither in our imaginations nor on screen, are exceptionally clever algorithms.
They may lack Scarlett’s fembot charm, but such algorithms will transform the way we fight wars, combat extremism and respond to acts of terrorism and natural disasters; how we run our homes, protect our banks and conduct surveillance and espionage.
Professor Nick Jennings, vice-provost of research at Imperial College London, has devoted his working life to artificial intelligence (AI), autonomous computing and cybersecurity. He is the government’s former chief scientific advisor on national security, Regius professor of computer science at Southampton University, and has many letters after his name: CB, FREng, FIEEE, FBCS, FIET. The CB — Companion of the Order of the Bath — is the most recent, given in the Queen’s New Year Honours for services to national security science. But the letters he is most interested in are AI.
You can turn to Isaac Asimov or computing manuals for long-winded explanations, but Jennings puts it simply: ‘It’s about making machines do smart things.’ He is sceptical about end-of-days predictions. ‘There’s been a lot in the press recently about AI taking over humanity and wiping us all out. That’s the kind of thing we see in the films. My take on AI is not that. I see AI very much as complementary to human expertise and endeavour — working with smarter machines which are able to shoulder the load and engage with us in a more useful way; in systems where lots of different humans and lots of different smart machines come together to do their stuff, then disband again. I call those human-agent collectives.’
For the last five years, he has been working on the Orchid Project, a research programme that teams computer science academics with engineering, logistics and robotics firms. The project has dealt mainly with natural disasters, including the earthquakes in Haiti in 2010 and Nepal last year. It also considered the missing Malaysia Airlines flight MH370 and the failure of technology to locate the wreckage.

The systems Jennings and his team work on fuse information from a vast number of origins: reports crowd-sourced from social media, data about the environment, maps, electricity grids, water sources, transport routes. Such quantities of data — particularly the many thousands of social-media messages sent in the aftermath of a disaster — are impossible for a human team to analyse quickly. A smart algorithm can.
To give an example: a building has collapsed, trapping people under the rubble. The survivors send texts and tweets to say they are in the basement, some hurt, some unharmed, but with no way out. An effective emergency response system would not only pick up those tweets but dispatch emergency workers to the exact spot using GPS data, along with unmanned aerial vehicles (drones). These would relay back to a base or field hospital information and images about casualties and whether diggers, ambulances or doctors were required.
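The triage at the heart of that scenario can be sketched in a few lines of code. The toy Python below is purely illustrative and is not the Orchid Project’s actual software: the distress keywords, the GPS-tagged messages and the rough grid-clustering are all invented assumptions, standing in for the trained classifiers and data fusion a real system would use.

```python
# Toy sketch of disaster-response triage (not Jennings' actual system):
# filter crowd-sourced messages for distress signals, group them by
# location, and rank sites for dispatch. All data here is invented.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Message:
    text: str
    lat: float   # GPS latitude attached to the tweet/text
    lon: float   # GPS longitude

# Hypothetical keyword list; a real system would use a trained classifier.
DISTRESS_KEYWORDS = {"trapped", "help", "injured", "collapsed", "basement"}

def is_distress(msg: Message) -> bool:
    """Crude keyword match on whitespace-split tokens."""
    return bool(set(msg.text.lower().split()) & DISTRESS_KEYWORDS)

def cluster_by_site(msgs, precision=3):
    """Group distress messages whose coordinates round to the same
    grid cell (3 decimal places is roughly a 100 m square)."""
    sites = defaultdict(list)
    for m in msgs:
        if is_distress(m):
            sites[(round(m.lat, precision), round(m.lon, precision))].append(m)
    return sites

def dispatch_order(sites):
    """Rank sites by message volume, a rough proxy for trapped survivors."""
    return sorted(sites.items(), key=lambda kv: len(kv[1]), reverse=True)

if __name__ == "__main__":
    feed = [
        Message("We are trapped in the basement, two injured", 27.7172, 85.3240),
        Message("Building has collapsed, send help", 27.7173, 85.3241),
        Message("Lovely weather today", 27.8000, 85.4000),
    ]
    for coords, msgs in dispatch_order(cluster_by_site(feed)):
        print(f"Send responders/drone to {coords}: {len(msgs)} distress messages")
```

Even this crude version shows the shape of the pipeline: the irrelevant chatter is filtered out, the two basement messages collapse into a single site, and that site goes to the top of the dispatch queue.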
Think of the awful Orlando nightclub shooting in June and the desperate texts sent by those trapped in the bathrooms. Better systems will in future be able to tell police exactly where hostages are being held.
Will the drone ever do away with the need for security agents in the field? ‘The days when you don’t have people in the street interacting, finding out what’s going on, and having conversations are quite some distance away,’ says Jennings. Writers of James Bond and John le Carré adaptations may breathe easily.
And what about that other filmic chestnut: the robot that takes on a life (and violent mission) of its own? ‘The mistake that people who don’t really understand the technology make,’ he says, ‘is this: you can build some really clever smart algorithms that are really good at a specific task.’ He cites the recent success of a Google computer that beat the Korean Go champion Lee Sedol — ‘a step on from the chess program that beat Kasparov in the 1990s’.
‘People mistake that very good, very narrow expertise and generalise it. But general intelligence is exceedingly difficult to do — and we don’t know how to do it.’
Likewise, empathy. ‘There are some things that computers aren’t good at. Social empathy is one of them. You can fake those things — so you see robots that try to have particular facial expressions, but in no sense does that robot have empathy.’
Flair and creativity are also difficult to fake. Engineers can make a robot that can neatly chop carrots, but not one that could whip up a boeuf en daube.
What is essential for the future of AI is a cohort of very intelligent humans. ‘We simply need more,’ says Jennings. ‘There is a real shortage of trained computer scientists, data scientists, people who do machine learning.
‘It’s not the fault of the universities. The problem starts before then. Schools have to start making computer science interesting. My kids have just been through GCSEs. The computer science at that level is just dire. Boring as sin. You can see why it puts them off.’
What we forget, seduced as we are by the polished helmets of science fiction, is that behind every gleaming robot is a highly educated computer scientist with a great many letters after his or her name.