Nov 24, 2022 | Read time 3 min

What Does the AI Bill of Rights Mean for Tech Companies?

Our CEO, Katy Wigdahl, looks at the new AI Bill of Rights and what it means for companies with AI at the heart of everything they do.

Making certain that artificial intelligence is helping rather than hindering society has been a concern for those of us in the field of technology for decades. Finding the balance between free enterprise and limiting regulations has been just as much of a challenge. As the current US administration releases its blueprint for holding AI to account, I’d like to take a moment to look at what the AI Bill of Rights means for tech companies like ours.

The Right Bill of Rights?

Negative stories of ‘AI gone wrong’ often capture the headlines, despite the very same technology helping us every day in all manner of ways. And while it’s important to remember the benefits of AI (and take with a pinch of salt stories of robots taking over the world), guidelines are crucial in maintaining checks and balances.

Earlier this year, I talked about the raft of regulations due to hit the industry in a piece asking if digital regulations would help unlock innovation. In the blog, I concluded that yes, smaller businesses could benefit from some of the rules coming into effect, but that we need to be careful how far some legislation goes. On the surface, the AI Bill of Rights – with its five principles – has the balance broadly right.

The Five Principles

With a focus on civil liberties and equal opportunities, the Bill itself identifies five principles to guide the design, use, and deployment of automated systems to protect us from the most harmful aspects of AI. These five are: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and, finally, Human Alternatives, Consideration, and Fallback.

As a non-regulatory blueprint, it’s difficult to see what the ultimate endpoint will be for companies working in tech. When will governments step in, for example? But as an early guide, it’s great to see this level of consideration for our industry, and equally great to see where our own rules and ethics marry up to these guidelines.

The Speechmatics Approach

There’s something incredibly rewarding in seeing so many parallels between the Bill of Rights blueprint and what we do at Speechmatics. For starters, when it comes to Data Privacy, we never train on customers’ data unless we’ve been given explicit permission to do so. In our self-service portal, we don’t keep any of the data from the audio or transcriptions, and our ISO 27001 accreditation gives us confidence in our own security practices.

As for Algorithmic Discrimination Protections, we’ve spent the past few years doing everything we can to make sure we’re accessible for all. With huge leaps in accuracy for a range of different voices (including some results that made global headlines), a great deal of what’s in the Bill aligns with our own aim to understand every voice.

The First of Many

At this stage, the AI Bill of Rights might be a uniquely American construct (and more a case of guidelines than a legal framework), but the EU isn’t far behind – and its incoming act is far less flexible. The AI Act, with implementation due in 2024, will be the first AI law from a major regulator anywhere in the world and could mean big changes for our industry. We’ll keep you updated as it develops.

For now, as the leader of a team of experts in AI and deep learning, it’s good to know we’re already on the right track.

Katy Wigdahl, CEO, Speechmatics
