Our products are built on decades of research. The main research projects to date are:
Large Scale Neural Network Language Models. SMART award 710288. March 2013 – August 2014.
This research project developed the core capability for training recurrent neural network language models on GPUs. The resulting language models are fast and effective, typically reducing errors by 20%.
Large Scale Neural Network Acoustic Models. SMART award 710556. October 2014 – March 2016.
Following on from our success in language modelling, we applied the same deep learning techniques to acoustic modelling, achieving even greater improvements in accuracy.
An Ultra-Efficient Decoder for Automatic Speech Recognition. SMART award 710513. September 2014 – May 2016.
Building on the previous research project, we realised that the traditional way of running speech recognition was no longer suited to our deep learning approach. We rewrote the decoder from scratch, achieving an eight-fold speed-up and producing what is probably the most efficient speech recognition system ever built.