Fitting New Speakers Based on a Short Untranscribed Sample

Eliya Nachmani, Adam Polyak, Yaniv Taigman, Lior Wolf
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3683-3691, 2018.

Abstract

Learning-based Text To Speech systems have the potential to generalize from one speaker to the next and thus require a relatively short sample of any new voice. However, this promise is currently largely unrealized. We present a method that is designed to capture a new speaker from a short untranscribed audio sample. This is done by employing an additional network that, given an audio sample, places the speaker in the embedding space. This network is trained as part of the speech synthesis system using various consistency losses. Our results demonstrate a greatly improved performance on both the dataset speakers and, more importantly, when fitting new voices, even from very short samples.

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-nachmani18a,
  title     = {Fitting New Speakers Based on a Short Untranscribed Sample},
  author    = {Nachmani, Eliya and Polyak, Adam and Taigman, Yaniv and Wolf, Lior},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3683--3691},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/nachmani18a/nachmani18a.pdf},
  url       = {https://proceedings.mlr.press/v80/nachmani18a.html},
  abstract  = {Learning-based Text To Speech systems have the potential to generalize from one speaker to the next and thus require a relatively short sample of any new voice. However, this promise is currently largely unrealized. We present a method that is designed to capture a new speaker from a short untranscribed audio sample. This is done by employing an additional network that, given an audio sample, places the speaker in the embedding space. This network is trained as part of the speech synthesis system using various consistency losses. Our results demonstrate a greatly improved performance on both the dataset speakers and, more importantly, when fitting new voices, even from very short samples.}
}
Endnote
%0 Conference Paper
%T Fitting New Speakers Based on a Short Untranscribed Sample
%A Eliya Nachmani
%A Adam Polyak
%A Yaniv Taigman
%A Lior Wolf
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-nachmani18a
%I PMLR
%P 3683--3691
%U https://proceedings.mlr.press/v80/nachmani18a.html
%V 80
%X Learning-based Text To Speech systems have the potential to generalize from one speaker to the next and thus require a relatively short sample of any new voice. However, this promise is currently largely unrealized. We present a method that is designed to capture a new speaker from a short untranscribed audio sample. This is done by employing an additional network that, given an audio sample, places the speaker in the embedding space. This network is trained as part of the speech synthesis system using various consistency losses. Our results demonstrate a greatly improved performance on both the dataset speakers and, more importantly, when fitting new voices, even from very short samples.
APA
Nachmani, E., Polyak, A., Taigman, Y. &amp; Wolf, L. (2018). Fitting New Speakers Based on a Short Untranscribed Sample. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3683-3691. Available from https://proceedings.mlr.press/v80/nachmani18a.html.

Related Material