Apr 22, 2025 | Read time 2 min

Barometer teams up with Speechmatics to drive accuracy and inclusion in podcast intelligence

Harnessing advanced speech recognition to power precise transcription, brand safety, and multilingual insights in audio and digital media.
Speechmatics Editorial Team

A strategic partnership focused on accuracy and inclusion

Today we are thrilled to announce that Barometer has selected Speechmatics as their speech technology partner.

Barometer, known for delivering AI-driven insights that help businesses understand communication strategies, has teamed up with Speechmatics to help unlock the value of the human voice.

At Barometer, accuracy and inclusion are core pillars, grounded in understanding all voices regardless of accent, dialect, or demographic. This overlap with Speechmatics' own mission to 'Understand Every Voice' makes for a natural partnership with an aligned focus.

As a founding member of the MOSAIC Collective, Barometer strives to provide a platform for diverse voices in audio. Additionally, its membership in the ALIVE Network supports Black creatives in the media space through the ALIVE podcast network.

Promoting inclusivity across all media forms, Barometer will leverage Speechmatics' speech technology to expand across geographies, enhancing both the accuracy and inclusivity of its platform.

Lauren King, Chief Marketing Officer at Speechmatics, commented: “We are thrilled to partner with Barometer, a company that shares our commitment to accuracy and inclusion. This collaboration will empower us to bring our advanced speech recognition technology to even more diverse voices around the world. Speechmatics is excited to help support the bold vision that founder and CEO, Tamara Zubatiy, has in transforming the podcast industry.”

Garbage in, garbage out

Despite the emergence of new transcription vendors fueled by the rise of Large Language Models, Barometer selected Speechmatics for its proprietary dataset: models trained on millions of hours of manually transcribed audio.

This unique advantage places Speechmatics ahead of competitors whose models are built on non-proprietary, open-source data.

As the adage goes, "garbage in, garbage out."

Higher transcription quality leads to fewer errors and, consequently, more accurate analysis. Speechmatics achieves consistent, reliable, and inclusive transcription, regardless of dialect.

"The decision was influenced by several factors," explained the Barometer team.

"We found that Speechmatics' transcription had a lower word error rate compared to other suppliers. Specifically, we considered the accuracy of transcriptions involving voices with accents, older adult voices, and voices from diverse backgrounds. Fewer errors result in better analysis, allowing Barometer to more accurately interpret the context of content."
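Word error rate compares a machine transcript against a human reference: the number of word substitutions, deletions, and insertions needed to match the reference, divided by the reference word count. A minimal sketch of that calculation (an illustrative example only, not either company's actual evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein edit distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown fox"))  # 0.0
print(wer("the quick brown fox", "the quikc brown box"))  # 0.5
```

A lower WER on accented, older, and demographically diverse voices was exactly the comparison Barometer describes running across suppliers.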

Another significant factor in the decision was Speechmatics' international focus. Their support for a variety of languages, including the first bilingual model for Spanglish, aligns with Barometer's diverse linguistic needs.

This capability is essential in the nuanced, individual world of podcasting, where natural speech and diverse voices must be accurately captured.

Grant Nelson, Barometer's CTO, remarked, "This partnership allows Barometer to continue delivering best-in-class brand suitability and contextual targeting data for the podcast advertising ecosystem." CEO Tamara Zubatiy added, "It also supports our expansion beyond audio into new media channels, including YouTube, digital, and more."


