Feb 20, 2018 | Read time 2 min

Automatic speech-to-text service without the hassle


Why are speech-to-text services hard to use?

I came across a blog post by Luis Quintanilla in which he built a console app to transcribe podcasts using the Microsoft Speech API.

The thing that instantly struck me was the hoops he had to jump through to transcribe the podcast. The app had to:

  • Transcode the audio to WAV, as Microsoft doesn't support MP3 files

  • Break the audio up into small chunks so it could be processed

  • Collate the results and rebuild them into a single transcript

Why, why, why? Every one of those steps is a chance for something to go wrong!

That's a whole load of extra work just to build a stable, reliable process around a utility service that turns audio into transcripts. After all, it's called automatic speech recognition, so you need it to be reliably automatable, right?

Why does it have to be that complex?

Good news… It doesn’t.

At Speechmatics, we believe it's frustrating when using a utility becomes the hard part of a task, rather than a tool that simply helps you solve your problem. So the Speechmatics API has been designed to be easy to consume, not to get in the way. With the Speechmatics API you can:

  • Use the original file (we support just about every file format you can think of)

  • Upload the file in one chunk (up to 2 hours of audio or 4GB in size)

  • If you ask us nicely, we'll even let you point us directly at the URL of the file you want transcribed, and we'll pull it for you, so you don't even have to download it

What does that mean?

Firstly, why bother using an API at all? Just register for our web app, use it to transcribe your audio directly, and download the result or have the transcript emailed back to you.

Or use the API key you receive on registration: a single HTTP POST submits the transcription request, and a single HTTP GET retrieves the result (API docs are available).
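To make the two-call flow concrete, here is a minimal Python sketch that builds the POST (submit a job, pointing the service at a file URL) and the GET (fetch the transcript). The base URL, endpoint paths, JSON field names, and auth header here are illustrative assumptions, not a definitive contract; check the API docs for the real values.

```python
# Sketch of the "one POST, one GET" flow. Endpoint paths, field names,
# and the auth scheme are assumptions for illustration -- consult the
# Speechmatics API docs before using them for real.
import json
import urllib.request

API_BASE = "https://asr.api.speechmatics.com/v2"  # assumed base URL


def build_submit_request(api_key: str, audio_url: str) -> urllib.request.Request:
    """One HTTP POST: ask the service to pull the file and transcribe it."""
    body = json.dumps({
        "type": "transcription",
        "transcription_config": {"language": "en"},
        "fetch_data": {"url": audio_url},  # service fetches the file itself
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/jobs",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def build_fetch_request(api_key: str, job_id: str) -> urllib.request.Request:
    """One HTTP GET: retrieve the finished transcript for a job."""
    return urllib.request.Request(
        f"{API_BASE}/jobs/{job_id}/transcript",
        headers={"Authorization": f"Bearer {api_key}"},
    )
```

In practice you would send each request with `urllib.request.urlopen(...)` (or any HTTP client) and poll the job until the transcript is ready; the point is that the whole interaction is two plain HTTP calls, with no transcoding or chunking on your side.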

Why not give it a go? If you register, we will give you 60 minutes of free transcription.

Other services that sound simple can be hard enough to use that you never even get to the transcript. Ours is genuinely easy to use, and I'm confident the results will be great too.

Ian Firth, Speechmatics
