Jun 20, 2019 | Read time 3 min

Speechmatics has a new cloud offering

Speechmatics has a new cloud offering for cloud-based speech transcription which is hosted in Azure Kubernetes Service (AKS).
Public and Private Cloud

The new service delivers extensive scalability and elasticity to support dynamic load whilst also providing feature parity across all of Speechmatics’ deployment options. The new cloud offering is intended for business partners on service contracts and is focused on a single API endpoint. The cloud API simplifies our customers’ workflows and integrations with other systems, whether hosted in public clouds or private data centers, and provides the latest JSON and plain TXT transcription output. To ensure the privacy of our customers’ data, all jobs put through the updated cloud offering are deleted 7 days after completion; customers can also delete a job manually before this time if desired.*
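To make the single-endpoint workflow concrete, here is a minimal sketch of how a client might assemble requests to submit a transcription job and to delete one before the automatic 7-day clean-up. The base URL, paths, and header names below are illustrative assumptions, not the documented Speechmatics API; the helpers only build request descriptions, leaving the HTTP transport to the caller.

```python
import json

API_BASE = "https://example.api/v1"  # hypothetical base URL, for illustration only


def submit_job_request(audio_path, api_key, output_format="json"):
    """Assemble the parts of a job-submission request (transport-agnostic)."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/jobs",
        "headers": {"Authorization": f"Bearer {api_key}"},
        "data": {"config": json.dumps({
            "type": "transcription",
            "output_format": output_format,  # "json" or "txt"
        })},
        "files": {"data_file": audio_path},
    }


def delete_job_request(job_id, api_key):
    """Jobs are deleted automatically 7 days after completion; this
    assembles a request to remove a job earlier than that."""
    return {
        "method": "DELETE",
        "url": f"{API_BASE}/jobs/{job_id}",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }
```

A caller would pass these dictionaries to whatever HTTP client it already uses, keeping authentication and job lifecycle handling in one place.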

Why did we choose Azure Kubernetes Service?

According to a Cisco survey, 73% of organizations say nearly all of their apps will be cloud-based by 2020, and at Speechmatics we want to offer our customers the best hosting services to meet their business needs.

Why Kubernetes?

Kubernetes was selected to power Speechmatics’ new cloud offering because it offers many new capabilities for our customers. It can serve those who have outgrown their Virtual Appliances and wish to deploy in their own data centers and virtual private cloud environments, as well as those who wish to integrate Speechmatics’ automatic speech recognition (ASR) into their existing cloud strategy. Kubernetes is the most effective platform for building a cloud microservice application that is not tied to any specific cloud provider, public or private. This microservices architecture is shared with Speechmatics’ Batch Virtual Appliance, so we are able to build the product using the same architecture and APIs, and to deploy it on different platforms depending on our customers’ scalability requirements.

Due to its wide adoption, Kubernetes has a growing community of companies and open source projects that help to extend its capabilities. Being part of this community means that Speechmatics can evolve its product offering more quickly than ever before, and with less complexity. Kubernetes helps us to provide a cloud service built on modern principles with an elastically scalable architecture.

Speechmatics’ cloud offering

Speechmatics always wants to provide the best possible transcription service to our customers. The new cloud offering means that customers no longer have to worry about operational considerations such as capacity planning and maintaining service availability. Speechmatics removes the complexity and cost of deploying hardware in customers’ own or third-party infrastructures by providing all of the maintenance, support and troubleshooting.
The availability of Speechmatics’ cloud offering and internal support accelerates customers’ time to market and removes the pain of having to manage their own deployment. Our dedicated Support team is always available to ensure our customers are successful. Speechmatics’ cutting-edge deep neural network architecture means our transcripts have world-leading accuracy across all platforms. As well as unrivalled accuracy, Speechmatics’ cloud offering has complete feature parity with our Batch Appliance and Batch Containers. This is possible because AKS allows the same core speech transcription component to be used in the cloud service as in our Virtual Appliances. As a result, all customers now have immediate access to new features developed across each of our deployment options, including Advanced Punctuation, Custom Dictionary and Channel Diarization. Customers also have instant access to new language and release updates.
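As an illustration of how such features are typically switched on per job, a transcription configuration might look like the following. The field names here (for example `additional_vocab` and `diarization`) are assumptions chosen for illustration, not an exact copy of the product’s configuration schema:

```json
{
  "type": "transcription",
  "transcription_config": {
    "language": "en",
    "additional_vocab": ["Speechmatics", "AKS"],
    "diarization": "channel"
  }
}
```

Because the same core component runs in the cloud service and the appliances, a configuration like this would behave the same way across deployment options.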

Why the cloud?

Businesses are shifting their operations to the cloud for a number of reasons, including:

  • To cut operational costs

  • To maintain a consistent working process

  • To collaborate in large and decentralised teams

  • To save the time needed for setup and training

  • To facilitate access to data

  • To access work from any device, as cloud services are mobile-optimised and device-independent

  • To make it easier to connect applications with each other

  • To reduce the burden of upgrading, maintaining, and safeguarding data

  • To keep pace with industry trends

Interested in speech recognition?

If you like the sound of Speechmatics’ new cloud offering, get in touch – we’d love to hear about your unique use case and how we can help with the adoption of ASR into your business.
