Jun 17, 2019 | Read time 2 min

Can Peppa Pig really educate future generations?


When you think of subtitles – or captions – what springs to mind? That they are useful when you can’t turn the sound up on your phone in a public space? Or that they make videos and TV programmes accessible and understandable for the deaf or hard of hearing community?

Me too – until recently, when I read an article in The Times by Greg Hurst titled “Subtitles on children’s TV boost reading skills”.

The article states:

“International studies showed that primary school children who watched television with subtitles were more likely to reach expected standards of reading — and more likely to handle advanced texts — than those who did not.”

In a generation where children are exposed to hours of television programmes and on-demand content on tablets, it’s a wonder this has never been thought of before.

It’s no secret that automatic speech recognition (ASR) has improved by leaps and bounds over the past few years, and it is increasingly being used across all platforms to subtitle content, improving accessibility for the deaf community and for situationally disadvantaged people. I’ve worked closely with subtitling technology – and, of course, ASR – for over two years, and I admit I didn’t see this one coming. The idea of switching subtitles on automatically for children’s programmes, as a learning tool, had never come up. So why not?

Well, the technology is there to do it, but it is ultimately at the discretion of broadcasters to switch on subtitles by default. And it doesn’t come without its challenges. Whilst ASR is a cost-effective way of providing subtitles, background noise in a programme can still degrade the quality of the audio going into the ASR engine, and punctuation in the subtitles must be correct so that children are learning from accurate examples. These challenges must be taken into account when it comes to improving child literacy.

We work with educational platforms such as Instructure and Udemy that provide learning tools to people globally. Specifically, we work with Instructure to subtitle educational videos, ensuring all content on their video platform – Arc – is accessible. Subtitling these videos encourages self-directed learning by pairing reading with listening, and it also expands the audience to include those who are deaf, hard of hearing, disabled or situationally disadvantaged. The introduction of “always on” subtitles for children’s programmes can only expand and enhance the opportunities already available on the market today.

Either way, in a world where we are always striving for more knowledge and faster learning, I’m excited to see another use case for ASR and subtitling come into the fold – one that improves accessibility – and to watch our technology continue to have a positive impact on society. It seems that Peppa Pig really could educate future generations in more ways than we thought.

Georgina Robertson, Senior Communications Manager, Speechmatics
