Lookahead When It Matters: Adaptive Non-causal Transformers for Streaming Neural Transducers

Grant Strimel, Yi Xie, Brian John King, Martin Radfar, Ariya Rastrow, Athanasios Mouchtaris
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:32654-32676, 2023.

Abstract

Streaming speech recognition architectures are employed for low-latency, real-time applications. Such architectures are often characterized by their causality. Causal architectures emit tokens at each frame, relying only on current and past signal, while non-causal models are exposed to a window of future frames at each step to increase predictive accuracy. This dichotomy amounts to a trade-off for real-time Automatic Speech Recognition (ASR) system design: profit from the low-latency benefit of strictly-causal architectures while accepting predictive performance limitations, or realize the modeling benefits of future-context models accompanied by their higher latency penalty. In this work, we relax the constraints of this choice and present the Adaptive Non-Causal Attention Transducer (ANCAT). Our architecture is non-causal in the traditional sense, but executes in a low-latency, streaming manner by dynamically choosing when to rely on future context and to what degree within the audio stream. The resulting mechanism, when coupled with our novel regularization algorithms, delivers comparable accuracy to non-causal configurations while improving significantly upon latency, closing the gap with their causal counterparts. We showcase our design experimentally by reporting comparative ASR task results with measures of accuracy and latency on both publicly accessible and production-scale, voice-assistant datasets.
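
To make the causal/non-causal trade-off concrete, below is a minimal PyTorch sketch contrasting a strictly-causal attention mask, a fixed-lookahead mask, and a per-frame adaptive lookahead mask. It is illustrative only: the gating module (LookaheadGate) and all names in it are assumptions made for exposition, not the paper's ANCAT implementation or its API.

    # Illustrative sketch (not the paper's implementation): how a streaming
    # self-attention layer might vary its lookahead per frame. The gate and
    # the mask construction below are assumptions made for exposition.
    import torch
    import torch.nn as nn

    def causal_mask(T: int) -> torch.Tensor:
        # Frame t may attend to frames <= t only (strictly causal).
        return torch.tril(torch.ones(T, T, dtype=torch.bool))

    def fixed_lookahead_mask(T: int, w: int) -> torch.Tensor:
        # Frame t may attend to frames <= t + w (fixed non-causal window),
        # so every emission waits for w future frames: a constant latency cost.
        idx = torch.arange(T)
        return idx.unsqueeze(1) + w >= idx.unsqueeze(0)

    def adaptive_lookahead_mask(budgets: torch.Tensor) -> torch.Tensor:
        # budgets[t] is the number of future frames frame t may see;
        # a budget of 0 recovers causal attention at that frame.
        T = budgets.numel()
        idx = torch.arange(T, device=budgets.device)
        return (idx + budgets).unsqueeze(1) >= idx.unsqueeze(0)

    class LookaheadGate(nn.Module):
        """Toy gate predicting a per-frame lookahead budget in {0..max_w}."""
        def __init__(self, d_model: int, max_w: int):
            super().__init__()
            self.proj = nn.Linear(d_model, max_w + 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (T, d_model) -> integer budget per frame. argmax is fine for
            # inference; training would need a differentiable relaxation
            # (e.g. Gumbel-softmax) plus a latency regularizer.
            return self.proj(x).argmax(dim=-1)

    if __name__ == "__main__":
        T, d_model, max_w = 8, 16, 3
        x = torch.randn(T, d_model)
        budgets = LookaheadGate(d_model, max_w)(x)
        print(causal_mask(T).int())               # latency: none
        print(fixed_lookahead_mask(T, max_w).int())  # latency: max_w frames, always
        print(budgets)                            # e.g. tensor([0, 2, 0, 1, ...])
        print(adaptive_lookahead_mask(budgets).int())  # latency: only where gated

Any of these boolean masks can be passed to a standard scaled dot-product attention; the adaptive variant is what lets a streaming encoder pay the lookahead latency only at frames where future context is predicted to help.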

Cite this Paper


BibTeX

@InProceedings{pmlr-v202-strimel23a,
  title     = {Lookahead When It Matters: Adaptive Non-causal Transformers for Streaming Neural Transducers},
  author    = {Strimel, Grant and Xie, Yi and King, Brian John and Radfar, Martin and Rastrow, Ariya and Mouchtaris, Athanasios},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {32654--32676},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/strimel23a/strimel23a.pdf},
  url       = {https://proceedings.mlr.press/v202/strimel23a.html}
}
Endnote

%0 Conference Paper
%T Lookahead When It Matters: Adaptive Non-causal Transformers for Streaming Neural Transducers
%A Grant Strimel
%A Yi Xie
%A Brian John King
%A Martin Radfar
%A Ariya Rastrow
%A Athanasios Mouchtaris
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-strimel23a
%I PMLR
%P 32654--32676
%U https://proceedings.mlr.press/v202/strimel23a.html
%V 202
APA
Strimel, G., Xie, Y., King, B.J., Radfar, M., Rastrow, A. & Mouchtaris, A. (2023). Lookahead When It Matters: Adaptive Non-causal Transformers for Streaming Neural Transducers. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:32654-32676. Available from https://proceedings.mlr.press/v202/strimel23a.html.
