A$^3$T: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing

He Bai, Renjie Zheng, Junkun Chen, Mingbo Ma, Xintong Li, Liang Huang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:1399-1411, 2022.

Abstract

Recently, speech representation learning has improved many speech-related tasks such as speech recognition, speech classification, and speech-to-text translation. However, all of these tasks are in the direction of speech understanding; in the inverse direction, speech synthesis, the potential of representation learning is yet to be realized, due to the challenging nature of generating high-quality speech. To address this problem, we propose our framework, Alignment-Aware Acoustic-Text Pretraining (A$^3$T), which reconstructs masked acoustic signals conditioned on the text input and the acoustic-text alignment during training. In this way, the pretrained model can generate high-quality reconstructed spectrograms, which can be applied directly to speech editing and unseen-speaker TTS. Experiments show that A$^3$T outperforms SOTA models on speech editing and improves multi-speaker speech synthesis without an external speaker verification model.
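To make the pretraining objective concrete, below is a minimal, illustrative PyTorch sketch of alignment-aware masked spectrogram reconstruction. This is not the authors' implementation: the plain Transformer encoder, the dimensions, the names (MaskedSpectrogramReconstructor, span_mask_from_alignment), and the span-masking policy are all assumptions made for illustration; only the core idea of reconstructing masked acoustic frames conditioned on the phoneme sequence and its frame-level alignment comes from the abstract.

# Illustrative sketch only; architecture and names are assumptions,
# not the A^3T implementation.
import torch
import torch.nn as nn

class MaskedSpectrogramReconstructor(nn.Module):
    def __init__(self, n_phones=100, n_mels=80, d_model=256):
        super().__init__()
        self.phone_emb = nn.Embedding(n_phones, d_model)    # text (phoneme) input
        self.spec_proj = nn.Linear(n_mels, d_model)         # acoustic input
        self.mask_emb = nn.Parameter(torch.zeros(d_model))  # learned [MASK] frame
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.out = nn.Linear(d_model, n_mels)               # predicted frames

    def forward(self, mel, phone_ids, frame2phone, frame_mask):
        # mel:         (B, T, n_mels) ground-truth spectrogram
        # phone_ids:   (B, P) phoneme sequence
        # frame2phone: (B, T) index of the aligned phoneme for each frame
        # frame_mask:  (B, T) True where the acoustic signal is masked
        x = self.spec_proj(mel)
        x = torch.where(frame_mask.unsqueeze(-1), self.mask_emb.expand_as(x), x)
        # Alignment-awareness: every frame, masked or not, also sees the
        # embedding of the phoneme it is aligned to.
        phone_per_frame = torch.gather(
            self.phone_emb(phone_ids), 1,
            frame2phone.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        h = self.encoder(x + phone_per_frame)
        return self.out(h)                                  # (B, T, n_mels)

def span_mask_from_alignment(frame2phone, masked_phones):
    # Mask whole phoneme-aligned spans rather than isolated random frames.
    return torch.isin(frame2phone, masked_phones)

# Toy usage: reconstruct masked frames with an L1 loss on masked positions.
B, T, P = 2, 50, 10
mel = torch.randn(B, T, 80)
phone_ids = torch.randint(0, 100, (B, P))
frame2phone = torch.sort(torch.randint(0, P, (B, T)), dim=1).values  # monotonic alignment
frame_mask = span_mask_from_alignment(frame2phone, torch.tensor([2, 5]))
model = MaskedSpectrogramReconstructor()
pred = model(mel, phone_ids, frame2phone, frame_mask)
loss = (pred - mel).abs()[frame_mask].mean()
loss.backward()

Masking whole phoneme-aligned spans, rather than scattered frames, forces the model to reconstruct from the text and alignment instead of interpolating neighboring frames, which is what makes the reconstructed spectrogram plausible for speech editing.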

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-bai22d,
  title     = {{A}$^3${T}: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing},
  author    = {Bai, He and Zheng, Renjie and Chen, Junkun and Ma, Mingbo and Li, Xintong and Huang, Liang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {1399--1411},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/bai22d/bai22d.pdf},
  url       = {https://proceedings.mlr.press/v162/bai22d.html}
}
Endnote
%0 Conference Paper
%T A$^3$T: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing
%A He Bai
%A Renjie Zheng
%A Junkun Chen
%A Mingbo Ma
%A Xintong Li
%A Liang Huang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-bai22d
%I PMLR
%P 1399--1411
%U https://proceedings.mlr.press/v162/bai22d.html
%V 162
APA
Bai, H., Zheng, R., Chen, J., Ma, M., Li, X. & Huang, L. (2022). A$^3$T: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:1399-1411. Available from https://proceedings.mlr.press/v162/bai22d.html.
