ELF: Encoding Speaker-Specific Latent Speech Feature for Speech Synthesis

Jungil Kong, Junmo Lee, Jeongmin Kim, Beomjeong Kim, Jihoon Park, Dohee Kong, Changheon Lee, Sangjin Kim
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:25176-25190, 2024.

Abstract

In this work, we propose a novel method for modeling numerous speakers, which enables expressing the overall characteristics of speakers in detail like a trained multi-speaker model without additional training on the target speaker’s dataset. Although various works with similar purposes have been actively studied, their performance has not yet reached that of trained multi-speaker models due to their fundamental limitations. To overcome previous limitations, we propose effective methods for feature learning and representing target speakers’ speech characteristics by discretizing the features and conditioning them to a speech synthesis model. Our method obtained a significantly higher similarity mean opinion score (SMOS) in subjective similarity evaluation than seen speakers of a high-performance multi-speaker model, even with unseen speakers. The proposed method also outperforms a zero-shot method by significant margins. Furthermore, our method shows remarkable performance in generating new artificial speakers. In addition, we demonstrate that the encoded latent features are sufficiently informative to reconstruct an original speaker’s speech completely. It implies that our method can be used as a general methodology to encode and reconstruct speakers’ characteristics in various tasks.
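The abstract mentions discretizing learned speaker features before conditioning the synthesis model. As a loose illustration only (the paper's actual discretization scheme is not specified on this page), a nearest-codebook lookup sketches what discretizing a continuous latent feature can mean; every name, shape, and value below is invented for the sketch:

```python
import numpy as np

# Hypothetical sketch: map continuous speaker features to discrete codes
# via nearest-neighbour codebook lookup (VQ-style). All dimensions are
# made up; this is not the paper's implementation.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 16))   # 64 learnable codes, 16-dim each
features = rng.normal(size=(10, 16))   # 10 continuous latent features

# Squared Euclidean distance from every feature to every code
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = dists.argmin(axis=1)         # one discrete code index per feature
quantized = codebook[indices]          # discretized features, same shape as input
```

The `indices` array is the discrete representation; `quantized` is what a downstream synthesis model could be conditioned on in such a scheme.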

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-kong24c,
  title     = {{ELF}: Encoding Speaker-Specific Latent Speech Feature for Speech Synthesis},
  author    = {Kong, Jungil and Lee, Junmo and Kim, Jeongmin and Kim, Beomjeong and Park, Jihoon and Kong, Dohee and Lee, Changheon and Kim, Sangjin},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {25176--25190},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/kong24c/kong24c.pdf},
  url       = {https://proceedings.mlr.press/v235/kong24c.html},
  abstract  = {In this work, we propose a novel method for modeling numerous speakers, which enables expressing the overall characteristics of speakers in detail like a trained multi-speaker model without additional training on the target speaker’s dataset. Although various works with similar purposes have been actively studied, their performance has not yet reached that of trained multi-speaker models due to their fundamental limitations. To overcome previous limitations, we propose effective methods for feature learning and representing target speakers’ speech characteristics by discretizing the features and conditioning them to a speech synthesis model. Our method obtained a significantly higher similarity mean opinion score (SMOS) in subjective similarity evaluation than seen speakers of a high-performance multi-speaker model, even with unseen speakers. The proposed method also outperforms a zero-shot method by significant margins. Furthermore, our method shows remarkable performance in generating new artificial speakers. In addition, we demonstrate that the encoded latent features are sufficiently informative to reconstruct an original speaker’s speech completely. It implies that our method can be used as a general methodology to encode and reconstruct speakers’ characteristics in various tasks.}
}
Endnote
%0 Conference Paper
%T ELF: Encoding Speaker-Specific Latent Speech Feature for Speech Synthesis
%A Jungil Kong
%A Junmo Lee
%A Jeongmin Kim
%A Beomjeong Kim
%A Jihoon Park
%A Dohee Kong
%A Changheon Lee
%A Sangjin Kim
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-kong24c
%I PMLR
%P 25176--25190
%U https://proceedings.mlr.press/v235/kong24c.html
%V 235
%X In this work, we propose a novel method for modeling numerous speakers, which enables expressing the overall characteristics of speakers in detail like a trained multi-speaker model without additional training on the target speaker’s dataset. Although various works with similar purposes have been actively studied, their performance has not yet reached that of trained multi-speaker models due to their fundamental limitations. To overcome previous limitations, we propose effective methods for feature learning and representing target speakers’ speech characteristics by discretizing the features and conditioning them to a speech synthesis model. Our method obtained a significantly higher similarity mean opinion score (SMOS) in subjective similarity evaluation than seen speakers of a high-performance multi-speaker model, even with unseen speakers. The proposed method also outperforms a zero-shot method by significant margins. Furthermore, our method shows remarkable performance in generating new artificial speakers. In addition, we demonstrate that the encoded latent features are sufficiently informative to reconstruct an original speaker’s speech completely. It implies that our method can be used as a general methodology to encode and reconstruct speakers’ characteristics in various tasks.
APA
Kong, J., Lee, J., Kim, J., Kim, B., Park, J., Kong, D., Lee, C., & Kim, S. (2024). ELF: Encoding Speaker-Specific Latent Speech Feature for Speech Synthesis. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:25176-25190. Available from https://proceedings.mlr.press/v235/kong24c.html.