Translatotron 2: High-quality direct speech-to-speech translation with voice preservation

Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, Roi Pomerantz
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10120-10134, 2022.

Abstract

We present Translatotron 2, a neural direct speech-to-speech translation model that can be trained end-to-end. Translatotron 2 consists of a speech encoder, a linguistic decoder, an acoustic synthesizer, and a single attention module that connects them together. Experimental results on three datasets consistently show that Translatotron 2 outperforms the original Translatotron by a large margin on both translation quality (up to +15.5 BLEU) and speech generation quality, and approaches that of cascade systems. In addition, we propose a simple method for preserving speakers’ voices from the source speech in the translated speech, which is in a different language. Unlike existing approaches, the proposed method is able to preserve each speaker’s voice on speaker turns without requiring speaker segmentation. Furthermore, compared to existing approaches, it better preserves speakers’ privacy and mitigates potential misuse of voice cloning for creating spoofing audio artifacts.
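The abstract describes four components wired through a single shared attention module: the encoder's states are attended once per decoding step, and the resulting context feeds both the linguistic decoder (phoneme prediction) and the acoustic synthesizer (spectrogram generation). The following is a minimal, hypothetical NumPy sketch of that wiring only; all names, dimensions, and weights are toy assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID, N_PHONEMES, D_SPEC = 80, 64, 40, 80

# Random toy weights; the real model learns these end-to-end.
w_enc = rng.standard_normal((D_IN, D_HID)) * 0.1
w_dec = rng.standard_normal((D_HID, N_PHONEMES)) * 0.1
w_syn = rng.standard_normal((D_HID, D_SPEC)) * 0.1

def speech_encoder(spec):
    """Encode source spectrogram frames into hidden states (T, D_HID)."""
    return np.tanh(spec @ w_enc)

def attend(query, keys):
    """The single attention module: one softmax-weighted context per step."""
    scores = keys @ query                  # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ keys                  # (D_HID,) context vector

def translate(src_spec, n_steps=5):
    enc = speech_encoder(src_spec)
    phonemes, frames = [], []
    query = np.zeros(D_HID)
    for _ in range(n_steps):
        ctx = attend(query, enc)
        # Linguistic decoder: target phoneme prediction from the context.
        phonemes.append(int(np.argmax(ctx @ w_dec)))
        # Acoustic synthesizer: the SAME context drives spectrogram output.
        frames.append(ctx @ w_syn)
        query = ctx                        # feed context back as next query
    return phonemes, np.stack(frames)

src = rng.standard_normal((20, D_IN))      # 20 frames of fake source speech
ph, spec = translate(src)
print(len(ph), spec.shape)                 # 5 (5, 80)
```

The key structural point the sketch tries to capture is that attention is computed once per step and shared, rather than each decoder maintaining its own attention over the encoder.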

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-jia22b,
  title     = {Translatotron 2: High-quality direct speech-to-speech translation with voice preservation},
  author    = {Jia, Ye and Ramanovich, Michelle Tadmor and Remez, Tal and Pomerantz, Roi},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {10120--10134},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/jia22b/jia22b.pdf},
  url       = {https://proceedings.mlr.press/v162/jia22b.html},
  abstract  = {We present Translatotron 2, a neural direct speech-to-speech translation model that can be trained end-to-end. Translatotron 2 consists of a speech encoder, a linguistic decoder, an acoustic synthesizer, and a single attention module that connects them together. Experimental results on three datasets consistently show that Translatotron 2 outperforms the original Translatotron by a large margin on both translation quality (up to +15.5 BLEU) and speech generation quality, and approaches the same of cascade systems. In addition, we propose a simple method for preserving speakers’ voices from the source speech to the translation speech in a different language. Unlike existing approaches, the proposed method is able to preserve each speaker’s voice on speaker turns without requiring for speaker segmentation. Furthermore, compared to existing approaches, it better preserves speaker’s privacy and mitigates potential misuse of voice cloning for creating spoofing audio artifacts.}
}
Endnote
%0 Conference Paper
%T Translatotron 2: High-quality direct speech-to-speech translation with voice preservation
%A Ye Jia
%A Michelle Tadmor Ramanovich
%A Tal Remez
%A Roi Pomerantz
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-jia22b
%I PMLR
%P 10120--10134
%U https://proceedings.mlr.press/v162/jia22b.html
%V 162
%X We present Translatotron 2, a neural direct speech-to-speech translation model that can be trained end-to-end. Translatotron 2 consists of a speech encoder, a linguistic decoder, an acoustic synthesizer, and a single attention module that connects them together. Experimental results on three datasets consistently show that Translatotron 2 outperforms the original Translatotron by a large margin on both translation quality (up to +15.5 BLEU) and speech generation quality, and approaches the same of cascade systems. In addition, we propose a simple method for preserving speakers’ voices from the source speech to the translation speech in a different language. Unlike existing approaches, the proposed method is able to preserve each speaker’s voice on speaker turns without requiring for speaker segmentation. Furthermore, compared to existing approaches, it better preserves speaker’s privacy and mitigates potential misuse of voice cloning for creating spoofing audio artifacts.
APA
Jia, Y., Ramanovich, M.T., Remez, T. & Pomerantz, R. (2022). Translatotron 2: High-quality direct speech-to-speech translation with voice preservation. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:10120-10134. Available from https://proceedings.mlr.press/v162/jia22b.html.