Blog 28th Sep 2021
Speechmatics attended AI Summit to speak about inclusion in AI

If there was an overriding takeaway from this year’s AI Summit at London’s ExCeL, it was… it’s good to be back. After last year’s hiatus, you could see the pleasure writ large on attendees’ faces as they walked the aisles of Hall S3, meeting new people and learning about the advances made in the past 24 months.

The COVID-19 pandemic was namechecked regularly during presentations, but mostly in a way that showcased how good technology, implemented well, could help get us not only back where we were, but to improve all our lives as we move forward.

Two of the headline topics spoken about at stands and in the arenas were Quantum Computing and building trust in AI. But there was also a great deal of interest around inclusion and diversity, with particular interest paid to bias in AI and what we can do about it.

Two talks on the first day, one from Speechmatics’ Chief Marketing Officer, David Keene, and the other from Randeep Sidhu, a product leader in health tech, highlighted this ever-relevant, yet often difficult, conversation.

First up, on Wednesday morning, David presented AI Bias, Inclusion and Diversity in Speech Recognition. He spoke in depth about not just the problems that automatic speech recognition faces – namely certain voices being less well served by ASR – but also what Speechmatics is doing to fix it.

With the mission to Understand Every Voice at the heart of everything, David gave examples of just where certain sectors of society are failed by speech recognition. Using the Stanford experiment as his primary example, he made the inequality in performance for voices of certain ethnicities clear to see.

Focusing on how the data training the speech recognition engines is often limited, he concluded with a tease about how Speechmatics was ready to unveil the most powerful and accurate speech recognition engine ever released. This engine, he hinted, will close the gap in voice disparity to a greater extent than ever before by learning speech recognition from first principles.

The second talk, entitled Stop Worrying and Start Tackling AI Bias, was presented by Randeep Sidhu. Sidhu has worked for both the Department of Health and Social Care and Babylon Health, as well as on last year’s NHS Test and Trace app. With a similar aim in mind to David’s talk, Sidhu’s main contention was that while, yes, there is an issue with data leading to AI bias, many of the thornier points around AI bias lie elsewhere.

His solutions? First, you can’t just build for the median; Sidhu argued this is how outliers are created and left behind. Next, he called for much better recruitment across the board, with hiring and empowerment at the forefront of tackling bias. He also made it clear how crucial it is that management take on feedback in this area. Ending on the note that AI will never be completely unbiased, Sidhu urged us to do our best: by scenario planning and spotting bias, we can – and should – do all we can to reduce it.

These two distinct, intricate talks highlighted the best of this year’s AI Summit. Both were perfect examples of the conversations everyone in the tech industry should be having. If any of us are really going to do something bold, now’s the time to talk about it. And when we’re done talking, we’ve plenty of work to do.

Want to test out the most powerful and accurate speech recognition engine ever released?

Request your free trial to test out our speech recognition engine!
