An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems

Hitesh Tulsiani, David Chan, Shalini Ghosh, Garima Lalwani, Prabhat Pandey, Ankish Bansal, Sri Garimella, Ariya Rastrow, Björn Hoffmeister
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:48823-48836, 2024.

Abstract

Dialog systems, such as voice assistants, are expected to engage with users in complex, evolving conversations. Unfortunately, traditional automatic speech recognition (ASR) systems deployed in such applications are usually trained to recognize each turn independently and lack the ability to adapt to the conversational context or incorporate user feedback. In this work, we introduce a general framework for ASR in dialog systems that can go beyond learning from single-turn utterances and learn over time how to adapt to both explicit supervision and implicit user feedback present in multi-turn conversations. We accomplish that by leveraging advances in student-teacher learning and context-aware dialog processing, and designing contrastive self-supervision approaches with Ohm, a new online hard-negative mining approach. We show that leveraging our new framework compared to traditional training leads to relative WER reductions of close to 10% in real-world dialog systems, and up to 26% on public synthetic data.
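
The abstract names the key ingredients but not their mechanics, so the sketch below illustrates just one of them: online hard-negative mining inside a contrastive objective. Everything here (the function name, the InfoNCE-style loss, the cosine-similarity mining rule, and the hyperparameters) is an illustrative assumption about how such a component could look, not the authors' Ohm implementation.

import torch
import torch.nn.functional as F


def contrastive_loss_with_hard_negative_mining(anchors, positives,
                                               temperature=0.1, num_hard=4):
    """InfoNCE-style loss that keeps, for each anchor, only the hardest
    (most similar) in-batch non-matching candidates as negatives.

    anchors, positives: (batch, dim) embeddings; row i of each is a pair.
    NOTE: an illustrative sketch, not the Ohm algorithm from the paper.
    """
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)

    # Scaled cosine similarity between every anchor and every candidate.
    sim = anchors @ positives.T / temperature              # (batch, batch)
    pos = sim.diagonal()                                   # matched pairs

    # Mask the positives on the diagonal, then keep the num_hard highest-
    # similarity negatives per anchor: the "online mining" step, done on
    # the fly within each batch rather than precomputed offline.
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    hard_neg, _ = sim.masked_fill(mask, float("-inf")).topk(
        k=min(num_hard, sim.size(0) - 1), dim=-1)

    # Cross-entropy over {positive, hard negatives}, positive at index 0.
    logits = torch.cat([pos.unsqueeze(-1), hard_neg], dim=-1)
    labels = torch.zeros(sim.size(0), dtype=torch.long, device=sim.device)
    return F.cross_entropy(logits, labels)


# Toy usage with random embeddings standing in for utterance/context encodings.
if __name__ == "__main__":
    a, p = torch.randn(8, 128), torch.randn(8, 128)
    print(contrastive_loss_with_hard_negative_mining(a, p).item())

Mining the hardest negatives on the fly keeps the contrastive signal focused on the most confusable in-batch candidates, which is the general motivation behind online (as opposed to offline, precomputed) hard-negative mining.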

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-tulsiani24a,
  title = {An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems},
  author = {Tulsiani, Hitesh and Chan, David and Ghosh, Shalini and Lalwani, Garima and Pandey, Prabhat and Bansal, Ankish and Garimella, Sri and Rastrow, Ariya and Hoffmeister, Bj\"{o}rn},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages = {48823--48836},
  year = {2024},
  editor = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume = {235},
  series = {Proceedings of Machine Learning Research},
  month = {21--27 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/tulsiani24a/tulsiani24a.pdf},
  url = {https://proceedings.mlr.press/v235/tulsiani24a.html},
  abstract = {Dialog systems, such as voice assistants, are expected to engage with users in complex, evolving conversations. Unfortunately, traditional automatic speech recognition (ASR) systems deployed in such applications are usually trained to recognize each turn independently and lack the ability to adapt to the conversational context or incorporate user feedback. In this work, we introduce a general framework for ASR in dialog systems that can go beyond learning from single-turn utterances and learn over time how to adapt to both explicit supervision and implicit user feedback present in multi-turn conversations. We accomplish that by leveraging advances in student-teacher learning and context-aware dialog processing, and designing contrastive self-supervision approaches with Ohm, a new online hard-negative mining approach. We show that leveraging our new framework compared to traditional training leads to relative WER reductions of close to 10% in real-world dialog systems, and up to 26% on public synthetic data.}
}
Endnote
%0 Conference Paper
%T An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems
%A Hitesh Tulsiani
%A David Chan
%A Shalini Ghosh
%A Garima Lalwani
%A Prabhat Pandey
%A Ankish Bansal
%A Sri Garimella
%A Ariya Rastrow
%A Björn Hoffmeister
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-tulsiani24a
%I PMLR
%P 48823--48836
%U https://proceedings.mlr.press/v235/tulsiani24a.html
%V 235
%X Dialog systems, such as voice assistants, are expected to engage with users in complex, evolving conversations. Unfortunately, traditional automatic speech recognition (ASR) systems deployed in such applications are usually trained to recognize each turn independently and lack the ability to adapt to the conversational context or incorporate user feedback. In this work, we introduce a general framework for ASR in dialog systems that can go beyond learning from single-turn utterances and learn over time how to adapt to both explicit supervision and implicit user feedback present in multi-turn conversations. We accomplish that by leveraging advances in student-teacher learning and context-aware dialog processing, and designing contrastive self-supervision approaches with Ohm, a new online hard-negative mining approach. We show that leveraging our new framework compared to traditional training leads to relative WER reductions of close to 10% in real-world dialog systems, and up to 26% on public synthetic data.
APA
Tulsiani, H., Chan, D., Ghosh, S., Lalwani, G., Pandey, P., Bansal, A., Garimella, S., Rastrow, A. & Hoffmeister, B. (2024). An Efficient Self-Learning Framework For Interactive Spoken Dialog Systems. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:48823-48836. Available from https://proceedings.mlr.press/v235/tulsiani24a.html.
