Future of healthcare

Talking cures

Speech-recognition technology will transform medicine, says Victoria Lambert

27 April 2019

9:00 AM


Much is said about the power of robotics and software to revolutionise healthcare, but ideas around speech recognition have for too long focused solely on dictation. Now experts are testing the same technology that gives us Alexa and Siri to see what could be achieved in the GP surgery, operating theatres and with dementia patients. According to Cathal McGloin, CEO of the AI company ServisBOT: ‘While it’s early days for widespread implementations of voice assistants in healthcare, the number and range of use cases is growing.’

McGloin says we can expect to see voice activation used everywhere from operating theatres to clinical trials and home healthcare. ‘You could ask: “Alexa, what are the A&E waiting times in hospitals within a ten-mile range?” The voice assistant can then advise on the name and location of the hospital that has the shortest wait.’
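Underneath, a query like that is a simple lookup: fetch the waiting times for hospitals in range and return the minimum. A toy sketch — the hospital names and times below are invented, and a real assistant would pull live data from an NHS feed:

```python
# Toy sketch: pick the hospital with the shortest A&E wait.
# Names and waiting times (minutes) are invented for illustration.
hospitals = {
    "St Example's": 95,
    "Northfield General": 40,
    "Riverside": 70,
}

# min() over the dict keys, ranked by each hospital's waiting time.
best = min(hospitals, key=hospitals.get)
print(best, hospitals[best])  # Northfield General 40
```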

Voice technology can enhance the day-to-day wellbeing of both clinician and patient, explains Dr Simon Wallace of Nuance Communications. His company produces the medical speech recognition technology Dragon Medical One, which has been rolled out in multiple NHS Trusts. ‘Close to 50 per cent of clinicians’ time is spent on documentation processes, only 13 per cent with patients,’ he says. ‘The scale of clinical documentation is exacerbating burnout rates. We need to take care of clinicians so they can take care of patients. Embracing artificial intelligence and cloud-based speech recognition systems speeds up clinical correspondence and frees up time for care.’

What does this mean? An end to handwritten ward notes typed up by secretaries, and a faster referral process. Dr Wallace cites Homerton University Hospital, where Nuance’s medical speech recognition technology has helped reduce the turnaround time on clinic letters from 17 days to two, while saving more than £150,000 a year on transcription.

Nathan Baranowski, founder and managing director of tech company OJO Solutions, agrees that adding AI to the mix opens up possibilities. ‘When you start to link voice tech with AI you get natural language processing,’ he says. His company is exploring how technology can support people in difficult situations where they may not want or be able to talk to another human.

Baranowski gives the example of someone with a brain injury or dementia. ‘They may need extra support; a system which can recognise when they are silent for too long, as well as what they say.’ He describes a scenario where voice-activated technology is installed in a kitchen. ‘When the patient walks in, that triggers the technology. If the patient says they want to make a cup of tea, a series of prompts could then walk them through the process. If they stay silent too long, it would ask: “What do you want to do?”’
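The prompt-and-timeout loop Baranowski describes can be sketched as a simple state machine. This is a hypothetical illustration, not OJO Solutions’ implementation — the step list and the 30-second silence limit are assumptions:

```python
# Hypothetical sketch: walk a user through a task step by step,
# checking in whenever they stay silent for too long.
SILENCE_LIMIT = 30  # seconds of silence before re-prompting (assumed)

TEA_STEPS = [
    "Fill the kettle with water and switch it on.",
    "Put a tea bag in your mug.",
    "Pour the boiled water into the mug.",
    "Add milk if you like, and remove the tea bag.",
]

def assistant_response(step_index, silent_seconds):
    """Decide what the assistant should say next.

    step_index: which step of the task the user is on.
    silent_seconds: how long they have been quiet.
    Returns a prompt to speak, or None if no prompt is needed yet.
    """
    if step_index >= len(TEA_STEPS):
        return "All done - enjoy your tea."
    if silent_seconds >= SILENCE_LIMIT:
        # User has gone quiet: check in, then repeat the current step.
        return "What do you want to do? " + TEA_STEPS[step_index]
    return None

# Example: the user has been quiet for 45 seconds on the first step.
print(assistant_response(0, 45))
```

A real system would feed this loop from a wake-on-entry sensor and a speech recogniser; the sketch only captures the silence-timeout logic.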

Baranowski says that voice-responsive tech can also be developed to understand stress. ‘It can determine speed and even tone of speech, which would be helpful for clinicians with patients with mental health concerns or multiple sclerosis. The technology can even pick up patients who are having a bad day.’

It’s that adjustment from literal interpretations of speech to tone which will be revolutionary, says Will Williams, machine learning engineer at Speechmatics, a Cambridge-based automatic speech recognition technology company. ‘Vocal Profile Analysis (VPA) already has a decent grasp of what it is we’re saying,’ he adds. ‘However, in the next few years, the likes of Alexa will evolve and understand how we’re saying it. Sentiment analysis in VPA will enable them to look for speech patterns that are indicative of certain illnesses, such as early onset dementia or mental health issues.’
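Of the signals mentioned, speed of speech is the easiest to picture: given a transcript and the duration of the audio, speaking rate is just words per minute, and a sustained deviation from a person’s baseline can be flagged. A minimal illustration — the baseline and tolerance below are made-up values, not clinical thresholds:

```python
# Minimal illustration of one prosodic signal: speaking rate.
# Baseline and tolerance are made-up values for the sketch.

def words_per_minute(transcript, duration_seconds):
    """Speaking rate from a transcript and its audio duration."""
    word_count = len(transcript.split())
    return word_count * 60.0 / duration_seconds

def flag_deviation(rate, baseline_rate, tolerance=0.25):
    """Flag a rate more than `tolerance` (fractional) from baseline."""
    return abs(rate - baseline_rate) / baseline_rate > tolerance

sample = "I have been feeling a little more tired than usual this week"
rate = words_per_minute(sample, duration_seconds=8.0)  # 12 words in 8 s
print(round(rate, 1), flag_deviation(rate, baseline_rate=150.0))
```

Real sentiment analysis also draws on pitch, pauses and word choice; this sketch shows only the rate component.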

Surgeons will benefit too. Neil Fluester, product director of EMEA at Polycom, a communications specialist, says: ‘In a healthcare environment, it is critical that equipment is kept sterilised in order to minimise the risk of infection — including doctors’ and nurses’ hands. Voice-activated technology could enable surgeons to control cameras and surgical team actions without the need to touch anything. State-of-the-art headsets with noise-cancelling features would provide surgeons with the focus and concentration that they need.’

Aymeric Flaisler, senior data scientist at agetech startup Birdie, says ‘passive listening’ microphone-based technology in health tech could help to prevent illness: ‘We’re pioneering technology that creates a “connected home”.’ Part of this is thanks to voice technology created by a company called Ally which can monitor any issues detected via sound, such as exacerbations in breathing. ‘It can also flag falls, intrusions or unusual behaviour patterns,’ says Flaisler. ‘It uses specific sound-recognition algorithms to detect both emergencies and changes to health and wellbeing by monitoring indicators of panic, sleep patterns, low mood and respiratory symptoms like coughing.’
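The alerting layer Flaisler describes can be pictured as a triage step on top of a sound classifier: the classifier (not shown here) emits labelled events, and the monitor routes each as an emergency or a wellbeing note. The labels below are illustrative assumptions, not Ally’s actual event taxonomy:

```python
# Hypothetical triage layer over a sound-event classifier.
# Event labels are illustrative, not a real product's taxonomy.
EMERGENCY = {"fall", "intrusion", "panic"}
WELLBEING = {"coughing", "disturbed_sleep", "low_mood_speech"}

def triage(events):
    """Split a day's detected sound events into alerts and notes."""
    alerts = [e for e in events if e in EMERGENCY]
    notes = [e for e in events if e in WELLBEING]
    return alerts, notes

alerts, notes = triage(["coughing", "fall", "coughing"])
print(alerts, notes)  # ['fall'] ['coughing', 'coughing']
```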

Taking this one step further, says Nathan Baranowski, means better service at every level of healthcare. ‘We already Google our symptoms. We ought to be able to do a lot more. Why can’t I have a conversation with Siri if I’m worried my child has chickenpox?’

As with any new technology there are limitations. Cathal McGloin warns: ‘With so many nuances in people’s speech as well as the challenge of background noise, there is still work to be done to make sure voice assistants understand a request and don’t give the wrong advice, which could have serious consequences.’

Dr Wallace warns of the need for vigilance when it comes to privacy: ‘If we do not have healthcare professionals’ and the public’s confidence, the huge opportunities of speech technology may take longer to be realised, so security and privacy must be taken very seriously.’