Mar 24, 2022 | Read time 4 min

The Surprising Effects of Emotion in Speech-to-Text Technology

We tested the effect of emotion in speech-to-text by using our Autonomous Speech Recognition technology with the iconic sitcom ‘Friends’.
Benedetta Cevoli, Senior Machine Learning Engineer


At Speechmatics, we're constantly working to grow our Autonomous Speech Recognition (ASR) capabilities. We've mastered 33 languages, but the research doesn't stop there. To understand language in a more comprehensive, authentic way, we must look at the role of emotion in speech-to-text.

Speech is a naturally rich tapestry of ever-changing emotions. Emotion is a crucial part of human communication, so we will always be pushing the boundaries of what speech-to-text can handle. Naturally, we tested our ASR on the voices of the characters from the beloved TV show Friends.

Set in '90s New York, the iconic sitcom follows Ross, Rachel, Monica, Chandler, Phoebe, and Joey through the trials and tribulations of adulthood. The show's rich storytelling ensures a vast array of emotions throughout the seasons, making it the perfect subject for our test.

To analyze the accuracy of our ASR, we used the MELD dataset, which includes around 1,400 dialogues and 13,000 utterances from the show, each annotated with an emotion and a sentiment label. We calculated transcription accuracy for each utterance from its Word Error Rate (WER). We then measured the ASR's accuracy for each character across emotional states: fear, neutral, anger, disgust, joy, positive surprise, and negative surprise.
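As a concrete sketch of that evaluation loop (the file name and column names below are illustrative assumptions, not our exact pipeline), per-utterance accuracy can be derived from WER and then averaged per character and emotion:

```python
# Minimal sketch, assuming the MELD annotations and ASR output are joined
# in a CSV with hypothetical columns: "speaker", "emotion",
# "reference" (ground-truth text) and "hypothesis" (ASR transcription).
import jiwer
import pandas as pd

df = pd.read_csv("meld_with_asr_output.csv")  # hypothetical file name

# Word Error Rate per utterance; accuracy is its complement, floored at zero
# (WER can exceed 1 when the hypothesis contains many insertions).
df["wer"] = [
    jiwer.wer(ref, hyp) for ref, hyp in zip(df["reference"], df["hypothesis"])
]
df["accuracy"] = (1 - df["wer"]).clip(lower=0)

# Mean transcription accuracy for each character and emotion label.
table = df.groupby(["speaker", "emotion"])["accuracy"].mean().unstack("emotion")
print(table.round(2))
```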

Fear vs Speech Recognition Technology

Amongst the different emotions, fear stands out the most. Compared to 85% accuracy for neutral speech, fearful speech registered an average of 78% - a seven-percentage-point drop.

Although accuracy varies across the other emotional states, none of those differences is statistically significant. The error bars in our results overlap, which means these minor differences are likely due to chance - in reality, transcription accuracy for neutral and angry speech, for example, is comparable. But if we take a closer look at individual characters, the pattern of results changes.

For fearful speech, Monica and Rachel's accuracy drops to 67% and 65% respectively, but the other characters are unaffected: Phoebe, Ross, Chandler, and Joey all have fear accuracy comparable to their neutral accuracy. Again, although their scores vary slightly, none of those differences is statistically significant – probably just chance. Rachel and Monica's fear accuracy, however, is statistically different from their neutral accuracy.
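For readers curious how such a comparison might be run, one simple check (continuing from the sketch above, and not necessarily the exact test behind our error bars) is a two-sample Welch's t-test on the per-utterance accuracies of the two emotion groups:

```python
# Compare per-utterance accuracy for one character across two emotions.
# Illustrative only - not necessarily the test used in the original analysis.
from scipy import stats

monica = df[df["speaker"] == "Monica"]
neutral = monica.loc[monica["emotion"] == "neutral", "accuracy"]
fear = monica.loc[monica["emotion"] == "fear", "accuracy"]

t_stat, p_value = stats.ttest_ind(neutral, fear, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real gap
```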


The Difference Between Positive and Negative Surprise

Phoebe, played by Lisa Kudrow, has an emotionally rich past. She's experienced just about every emotion on the spectrum, so it's no surprise that her test results provide the best insight into the impact of emotion in speech-to-text technology.

Whereas Phoebe's negatively surprised speech was transcribed with 93% accuracy, her positively surprised speech recorded only 70%. This issue doesn't extend beyond Phoebe, though: every other character scored similarly high accuracy for both.

Interestingly, Joey's positive surprise accuracy was the same as Phoebe's negative surprise accuracy – 93%. Surprise is the same underlying emotion, but its sentiment drastically changes the way each character expresses it – as shown indirectly by the difference in accuracy levels. While more work is needed to uncover the true reason why, highlighting the effect of emotion in speech-to-text helps us better understand the true capabilities of ASR and of language in general.

Phoebe's positive surprise and Monica and Rachel's fear are what our ASR struggles with the most. This raises the question: why does ASR struggle with these emotional tones? Is it because people's voices reach very high pitch ranges in these emotional states?
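One way to probe that pitch hypothesis (a hypothetical exploration, not an analysis from this study) would be to estimate the fundamental frequency of utterances in each emotional state, for example with librosa's pYIN tracker:

```python
# Estimate fundamental frequency (F0) for a single utterance with pYIN.
# The audio file and frequency range are illustrative assumptions.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=None)  # hypothetical audio clip
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]  # keep voiced, tracked frames
print(f"median F0: {np.median(voiced_f0):.0f} Hz, "
      f"95th percentile: {np.percentile(voiced_f0, 95):.0f} Hz")
```

Comparing these statistics between, say, fearful and neutral utterances would show whether the hardest emotional states really do sit in higher pitch ranges.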

One thing is clear: we all have different, unique voices, and they change constantly. Amongst many factors, emotion is a major driver of those changes - and the changes themselves are individual. Monica and Rachel's fearful speech is distinctive in its own way, just as Phoebe's is when she is positively surprised.

How Can Emotion Improve Speech-to-Text?

Emotions are as complex as our voices - a unique array of variations that colors our daily lives. ASR must adapt to these different emotions and to the individual ways each of us expresses them in our voices.

If our technology focused only on neutral speech, we would miss much of the variation of everyday life. Through continued research, we're looking to incorporate as much diversity into our systems as possible - and that doesn't just mean different languages and dialects. People are happy, stressed, excited, frightened, and angry, or perhaps everything all at once.

Emotions change the way you speak; we need to change how we listen.

Benedetta Cevoli - Data Science Engineer, Speechmatics
