Jan 26, 2017 | Read time 1 min

Detecting the pattern of speech

By now it is pretty obvious that speech recognition is taking over the world, and so long as it doesn’t go all HAL 9000 on us, the future looks very interactive. The promise of a world where lights, TVs and coffee machines can be activated without touch has already been realised.

But this is a (relatively) simple process of cross-checking what the device thinks it has heard against a known list of commands (a complete oversimplification, of course – the time it has taken to reach this epoch of voice control is some indication of the complexity of the process).
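To make the “known list of commands” idea concrete, here is a minimal Python sketch of that cross-checking step. The command list and names are hypothetical, purely for illustration – this is not how any particular device implements it.

import difflib

# Hypothetical mapping from spoken phrases to device actions
COMMANDS = {
    "lights on": "turn_on_lights",
    "lights off": "turn_off_lights",
    "make coffee": "start_coffee_machine",
}

def match_command(transcript):
    """Return the action for the closest known command, or None."""
    normalised = transcript.lower().strip()
    # Fuzzy matching absorbs small recognition errors ("lights onn")
    matches = difflib.get_close_matches(normalised, list(COMMANDS), n=1, cutoff=0.8)
    return COMMANDS[matches[0]] if matches else None

print(match_command("Lights on"))               # turn_on_lights
print(match_command("open the pod bay doors"))  # None

Real systems layer wake words, grammars and confidence thresholds on top, but the core idea is the same: recognise first, then match against what the device knows how to do.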

We are now seeing more companies working on this kind of technology.

Banks are now using your voice as your password, and emotion engines can detect a caller’s emotional state during a call to a call centre, giving the operative a heads-up if things are about to go south and suggesting actions they can take to resolve the issue.

The latest breakthrough comes from Canary Speech – a US start-up – which has developed a way of analysing phone conversations for signs of several neurological diseases, ranging from Parkinson’s to dementia.

A pinch of reality may be required, though. These are early days for the start-up. Don’t expect GPs to be replaced by a microphone anytime soon, but there is no reason to think that, between machine learning and voice recognition, we couldn’t start to see chatbots used for front-line GP care – culminating in the inevitable prescription of ibuprofen, a staple of the British doctor. The tech Canary is investigating is not yet fully mature; the company will need to rely heavily on large data sets gathered over a sensible period of time to teach its machine to identify a problem effectively.

How this technology will be rolled out is a big issue to consider. At the moment, most calls to call centres are recorded for monitoring and quality purposes – that’s monitoring of the call centre operatives, not the caller. I’m not sure many people would appreciate being told by Vodafone that it has detected signs of dementia in their speech. That’s all yet to be ironed out as we get to grips with more of our data being analysed.

From Speechmatics’ point of view, the more research that goes into neural networks and machine learning the better. In-house, we are getting better and better at finding more efficient ways of turning speech into text. We can now do on a phone what, five years ago, required banks of graphics processors. This has come about because the collective knowledge of computer science has advanced so much in that time. Research breeds research: the more uses there are for speech recognition, the more ways there are to streamline it.

Luke Berry, Speechmatics