In 2003, the UK government banned the use of hand-held mobile phones and other hand-held devices while driving a vehicle. In March 2022, it went a step further, banning drivers from taking photos or videos, and even from scrolling through playlists or playing games. To use your phone legally, you need to be safely parked with the engine off. Either that, or an accurate speech-to-text system.
In our increasingly plugged-in society, dangerous distractions are available at your fingertips. One in every four car accidents in the US is caused by texting while driving, so manufacturers are adapting to the times by building the latest speech-to-text technology into their vehicles (source: Texting and Driving Accident Statistics).
Accurate voice recognition is necessary in any modern vehicle. A competent system should let the driver change a song, find a route or destination, book a service appointment, send emails and messages, and even set calendar appointments. In the future, we might use our voices to direct self-driving cars, for example by simply saying "drive me to London Paddington".
Car environments are notoriously noisy, and the level of noise can vary drastically with driving conditions such as vehicle speed, road surface, and engine revs, as well as atmospheric conditions such as wind and rain. Such noisy environments are a perfect test of our aim to understand every voice in every situation. That's why we set out to learn how our Automatic Speech Recognition (ASR) fares when used in vehicles.
To do so, we used the Audio-Visual Speech Recognition in a Car (AVICAR) corpus, which includes speech from 86 English speakers recorded in in-car environments under five noise conditions: engine idling (IDL), 35mph with the windows up (35U), 35mph with the windows down (35D), 55mph with the windows up (55U), and 55mph with the windows down (55D) (source: AVICAR Project: Audio-Visual Speech Recognition in a Car).
Across these five conditions, we measured accuracy using weighted average word error rate (WER) against our competitors and found that Speechmatics' ASR is markedly more accurate in vehicles than some of the industry's biggest names: competitor WERs ranged from 60% for the least accurate provider down to 22% for the second most accurate.
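For readers unfamiliar with the metric, WER is the word-level edit distance between a reference transcript and a system's hypothesis, divided by the number of reference words; a weighted average WER pools errors across conditions in proportion to how many words each condition contributes. The sketch below is an illustrative implementation of that standard definition, not Speechmatics' actual benchmarking code:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

def weighted_wer(pairs: list[tuple[str, str]]) -> float:
    """Average WER over (reference, hypothesis) pairs, weighted by reference length."""
    total_words = sum(len(r.split()) for r, _ in pairs)
    total_errors = sum(wer(r, h) * len(r.split()) for r, h in pairs)
    return total_errors / total_words

print(wer("drive me to london paddington", "drive me to london paddington"))  # 0.0
print(wer("find a fuel station near me", "find fuel station near me"))        # one deletion: 1/6
```

A perfect transcript scores 0; errors above 100% are possible when the hypothesis inserts many extra words, which is why noisy-condition WERs can climb so steeply.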
This pattern is consistent across the five noise conditions: Speechmatics is the most accurate in every single one. Critically, the data also shows that the absolute accuracy gap between Speechmatics and the next closest provider doubles in the noisiest condition (the highest speed with windows down, 55D: an 8% difference in accuracy) compared to the quietest driving condition (the lowest speed with windows up, 35U: a 4% difference in accuracy). In other words, as the level of noise increases, so does the accuracy gap between Speechmatics and its competitors.
Human speech is extremely complex and varied – there isn’t one single way to talk. Vehicular voice recognition is particularly challenging because it combines voice differences in humans with the loud, disruptive nature of cars or any other kind of vehicle. So, voice recognition that is actually useful in the real world needs to prioritize accuracy in a noisy environment more than anything else.
The motor industry is a huge use case for speech-to-text technology, and manufacturers are taking notice. For example, Microsoft has partnered with BMW to create vehicle software based on Microsoft's Azure, which aims to deliver a more natural-sounding system that accounts for imperfect speech. Amazon is going a step further, allowing manufacturers to build digital assistants using Alexa. This means its tech can direct drivers to Amazon stores and even order groceries while driving. So, while speech-to-text technology certainly has safety benefits, the commercial advantages are massive (sources: Microsoft Partners with BMW; Amazon In-Car Voice Assistants).
At Speechmatics, our sole focus is to understand every voice, whether that is through the 7100+ languages on Earth or simply being able to find a fuel station near you. Accuracy matters and, in this case, it directly impacts the quality and safety of hands-free driving.
Recognizing this, vehicle AI is becoming more competent with each passing year – most modern cars have some form of cruise control and voice assistant. Consumers are beginning to expect this as the bare minimum: 73% of drivers anticipated using built-in voice assistants for one or more purposes by 2022 (source: Capgemini Voice on the Go Report). Furthermore, Automotive World says that by 2028, 90% of new vehicles worldwide will have in-car voice tech embedded (source: The Road Ahead for Voice-Controlled Cars).
It’s clear, then, that voice recognition has a mobile future. Manufacturers must continue to innovate.
Benedetta Cevoli, Data Science Engineer, Speechmatics