Mar 13, 2020 | Read time 2 min

what3words launches seamless app voice entry on Apple Watch

what3words for Apple Watch is voice enabled, allowing users to enter a what3words address hands-free in any of eight supported languages: English, Arabic, German, Hindi, Japanese, Korean, Mandarin and Spanish.

The what3words app has introduced simple, accurate location and voice technology for Apple Watch. what3words is the simplest way to talk about and share location, anywhere in the world. It divides the world into 57 trillion 3m x 3m squares, giving each square a fixed, simple address made up of three dictionary words. For example, the exact 3 metre square of the popular Fort Point view of the Golden Gate Bridge in San Francisco can be found at ///usual.trying.highs.
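As a rough illustration of the three-word format, the sketch below splits a what3words-style string into its words. The `parse_w3w` helper and the regex are illustrative assumptions only; the real address grammar (official word lists, support for non-Latin scripts) is defined by what3words, not by this pattern:

```python
import re

# Illustrative pattern for a Latin-script three-word address,
# optionally prefixed with '///'. Not the official grammar.
W3W_PATTERN = re.compile(r"^/{0,3}([a-z]+)\.([a-z]+)\.([a-z]+)$")

def parse_w3w(address: str):
    """Return the three words of a string like '///usual.trying.highs', or None."""
    match = W3W_PATTERN.match(address.strip().lower())
    return match.groups() if match else None

print(parse_w3w("///usual.trying.highs"))  # ('usual', 'trying', 'highs')
print(parse_w3w("not an address"))         # None
```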

With its hyper-accurate locations, what3words is the only address system built for voice input. The what3words Voice API, powered by industry-leading speech recognition from Speechmatics, makes entering an address by voice fast and simple. Other functionality includes viewing the 3 word address of your current location by pressing on the watch screen, sharing any what3words address to an iOS device via iMessage, and navigating using Apple Maps or what3words’ Compass feature. Josh Wigmore, Chief Product Officer (CPO) at what3words, explained: “Using voice is one of the quickest ways to enter a what3words address, and with what3words for Apple Watch, users can now do this right from their wrist without needing their phone.

You can quickly share your current address to meet with friends, or look one up with voice and navigate there easily with Apple Maps or Compass. what3words for Apple Watch is powered by our new what3words Voice API so it’s fast and accurate, currently supporting eight languages with more coming soon.” To use voice to enter precise locations easily, download the what3words app from the App Store and tap the what3words icon on the watch screen to access the new features. The what3words app for Apple Watch requires watchOS 6 or later.
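For developers, a what3words address can also be resolved to coordinates through the public what3words REST API (separate from the Voice API used by the watch app). A minimal sketch of building such a request, assuming the v3 `convert-to-coordinates` endpoint and a placeholder API key:

```python
from urllib.parse import urlencode

API_BASE = "https://api.what3words.com/v3"

def coordinates_url(words: str, api_key: str) -> str:
    """Build a v3 convert-to-coordinates request URL for a three-word address."""
    # Strip any leading '///' prefix before sending the words parameter.
    query = urlencode({"words": words.lstrip("/"), "key": api_key})
    return f"{API_BASE}/convert-to-coordinates?{query}"

url = coordinates_url("///usual.trying.highs", "YOUR_API_KEY")
print(url)
```

Fetching that URL returns JSON that includes a `coordinates` object with `lat` and `lng` fields for the centre of the 3m square.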

How to enter a what3words address by voice

1. Open the app, then tap on the what3words address that appears. A list will appear.
2. Select ‘Language’, then choose the language you’d like to use. This will take you back to the main app screen.
3. Tap the red icon with the speech bubble, then say the what3words address you’d like to enter.

How to see the what3words address for your current location

1. Open the app; a what3words address will appear.
2. To refresh it, firmly press the what3words address on your screen, then tap the refresh icon. The what3words address for your current location will appear.

How to share a what3words address via iMessage

1. Enter a what3words address, or tap on the what3words address for your current location. A list will appear.
2. Select ‘Share’. This will open your contacts; select one to send the what3words address to via iMessage.

How to navigate to a what3words address via Apple Maps

1. Enter a what3words address, or tap on the what3words address for your current location. A list will appear.
2. Select ‘Navigate’, then select ‘Maps’ to navigate to that location using Apple Maps.

How to navigate to a what3words address with the Compass feature

1. Enter a what3words address, or tap on the what3words address for your current location. A list will appear.
2. Select ‘Navigate’, then select ‘Compass’ to navigate to that location using the Compass feature.

A little arrow will show you the direction of that what3words address, and you’ll be able to see the distance between you and that location. When you arrive at that square, the red circle around the what3words address will turn green.
