Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance

Heeseung Kim, Sungwon Kim, Sungroh Yoon
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:11119-11133, 2022.

Abstract

We propose Guided-TTS, a high-quality text-to-speech (TTS) model that does not require any transcript of the target speaker, using classifier guidance. Guided-TTS combines an unconditional diffusion probabilistic model with a separately trained phoneme classifier for classifier guidance. Our unconditional diffusion model learns to generate speech without any context from untranscribed speech data. For TTS synthesis, we guide the generative process of the diffusion model with a phoneme classifier trained on a large-scale speech recognition dataset. We present a norm-based scaling method that reduces the pronunciation errors of classifier guidance in Guided-TTS. We show that Guided-TTS achieves performance comparable to that of the state-of-the-art TTS model, Grad-TTS, without any transcript for LJSpeech. We further demonstrate that Guided-TTS performs well on diverse datasets, including a long-form untranscribed dataset.
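The core mechanism the abstract describes can be sketched as follows: in classifier guidance, the conditional score is approximated as the unconditional diffusion model's score plus the gradient of the phoneme classifier's log-probability, and norm-based scaling rescales that gradient by the norm of the unconditional score. This is a minimal NumPy sketch of one such guidance step; the function names (`uncond_score_fn`, `classifier_grad_fn`) and the exact scaling formula are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def guided_score(x, uncond_score_fn, classifier_grad_fn, gradient_scale=1.0):
    """Sketch of one classifier-guidance step:
    grad log p(x | y) ~ grad log p(x) + s * grad log p(y | x),
    with the classifier gradient rescaled by the norm of the
    unconditional score (norm-based scaling, sketched)."""
    s_uncond = uncond_score_fn(x)          # score from the unconditional diffusion model
    g = classifier_grad_fn(x)              # gradient of the phoneme classifier's log-prob
    # Norm-based scaling (illustrative): match the classifier gradient's
    # norm to the unconditional score's norm before applying the scale.
    g_scaled = g * (np.linalg.norm(s_uncond) / (np.linalg.norm(g) + 1e-12))
    return s_uncond + gradient_scale * g_scaled
```

With `gradient_scale = 0` this reduces to unconditional generation, which is how the diffusion model is trained on untranscribed speech in the first place.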

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-kim22d,
  title     = {Guided-{TTS}: A Diffusion Model for Text-to-Speech via Classifier Guidance},
  author    = {Kim, Heeseung and Kim, Sungwon and Yoon, Sungroh},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {11119--11133},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/kim22d/kim22d.pdf},
  url       = {https://proceedings.mlr.press/v162/kim22d.html},
  abstract  = {We propose Guided-TTS, a high-quality text-to-speech (TTS) model that does not require any transcript of target speaker using classifier guidance. Guided-TTS combines an unconditional diffusion probabilistic model with a separately trained phoneme classifier for classifier guidance. Our unconditional diffusion model learns to generate speech without any context from untranscribed speech data. For TTS synthesis, we guide the generative process of the diffusion model with a phoneme classifier trained on a large-scale speech recognition dataset. We present a norm-based scaling method that reduces the pronunciation errors of classifier guidance in Guided-TTS. We show that Guided-TTS achieves a performance comparable to that of the state-of-the-art TTS model, Grad-TTS, without any transcript for LJSpeech. We further demonstrate that Guided-TTS performs well on diverse datasets including a long-form untranscribed dataset.}
}
Endnote
%0 Conference Paper
%T Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance
%A Heeseung Kim
%A Sungwon Kim
%A Sungroh Yoon
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-kim22d
%I PMLR
%P 11119--11133
%U https://proceedings.mlr.press/v162/kim22d.html
%V 162
%X We propose Guided-TTS, a high-quality text-to-speech (TTS) model that does not require any transcript of target speaker using classifier guidance. Guided-TTS combines an unconditional diffusion probabilistic model with a separately trained phoneme classifier for classifier guidance. Our unconditional diffusion model learns to generate speech without any context from untranscribed speech data. For TTS synthesis, we guide the generative process of the diffusion model with a phoneme classifier trained on a large-scale speech recognition dataset. We present a norm-based scaling method that reduces the pronunciation errors of classifier guidance in Guided-TTS. We show that Guided-TTS achieves a performance comparable to that of the state-of-the-art TTS model, Grad-TTS, without any transcript for LJSpeech. We further demonstrate that Guided-TTS performs well on diverse datasets including a long-form untranscribed dataset.
APA
Kim, H., Kim, S. & Yoon, S. (2022). Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:11119-11133. Available from https://proceedings.mlr.press/v162/kim22d.html.

Related Material