Parallel WaveNet: Fast High-Fidelity Speech Synthesis

Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, Demis Hassabis
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3918-3926, 2018.

Abstract

The recently developed WaveNet architecture is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today’s massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system generates high-fidelity speech samples more than 20 times faster than real time, a 1000x speedup relative to the original WaveNet, and is capable of serving multiple English and Japanese voices in a production setting.
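For intuition, the sketch below illustrates the Probability Density Distillation objective the abstract describes: the student network proposes waveforms in a single parallel pass, and the loss is a Monte Carlo estimate of the KL divergence between the student's distribution and that of the frozen autoregressive teacher. The `student` and `teacher` interfaces (a student that returns samples together with their log-density, a teacher exposing `log_prob`) are hypothetical stand-ins, not the paper's actual code, and the auxiliary losses reported in the paper are omitted.

```python
# Minimal sketch of Probability Density Distillation (hypothetical interfaces).
import torch

def distillation_loss(student, teacher, z, conditioning):
    """Monte Carlo estimate of KL(student || teacher) on student samples."""
    # The student (e.g. an inverse autoregressive flow) maps white noise z
    # to a waveform x in one parallel pass and can score its own samples.
    x, student_log_prob = student(z, conditioning)

    # The trained teacher WaveNet scores the same samples; since x is fully
    # known, all of its factors p(x_t | x_<t) can be evaluated in parallel.
    with torch.no_grad():
        teacher_log_prob = teacher.log_prob(x, conditioning)

    # KL(q || p) = E_q[log q(x) - log p(x)], estimated from student samples.
    return (student_log_prob - teacher_log_prob).mean()
```

Because the expectation is taken over the student's own samples, gradients flow only through the student; the teacher acts purely as a fixed scoring function, which is what makes the distilled network fast at generation time.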

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-oord18a,
  title     = {Parallel {W}ave{N}et: Fast High-Fidelity Speech Synthesis},
  author    = {van den Oord, Aaron and Li, Yazhe and Babuschkin, Igor and Simonyan, Karen and Vinyals, Oriol and Kavukcuoglu, Koray and van den Driessche, George and Lockhart, Edward and Cobo, Luis and Stimberg, Florian and Casagrande, Norman and Grewe, Dominik and Noury, Seb and Dieleman, Sander and Elsen, Erich and Kalchbrenner, Nal and Zen, Heiga and Graves, Alex and King, Helen and Walters, Tom and Belov, Dan and Hassabis, Demis},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3918--3926},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/oord18a/oord18a.pdf},
  url       = {https://proceedings.mlr.press/v80/oord18a.html},
  abstract  = {The recently-developed WaveNet architecture is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today’s massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples at more than 20 times faster than real-time, a 1000x speed up relative to the original WaveNet, and capable of serving multiple English and Japanese voices in a production setting.}
}
EndNote
%0 Conference Paper
%T Parallel WaveNet: Fast High-Fidelity Speech Synthesis
%A Aaron van den Oord
%A Yazhe Li
%A Igor Babuschkin
%A Karen Simonyan
%A Oriol Vinyals
%A Koray Kavukcuoglu
%A George van den Driessche
%A Edward Lockhart
%A Luis Cobo
%A Florian Stimberg
%A Norman Casagrande
%A Dominik Grewe
%A Seb Noury
%A Sander Dieleman
%A Erich Elsen
%A Nal Kalchbrenner
%A Heiga Zen
%A Alex Graves
%A Helen King
%A Tom Walters
%A Dan Belov
%A Demis Hassabis
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-oord18a
%I PMLR
%P 3918--3926
%U https://proceedings.mlr.press/v80/oord18a.html
%V 80
%X The recently-developed WaveNet architecture is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today’s massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples at more than 20 times faster than real-time, a 1000x speed up relative to the original WaveNet, and capable of serving multiple English and Japanese voices in a production setting.
APA
van den Oord, A., Li, Y., Babuschkin, I., Simonyan, K., Vinyals, O., Kavukcuoglu, K., van den Driessche, G., Lockhart, E., Cobo, L., Stimberg, F., Casagrande, N., Grewe, D., Noury, S., Dieleman, S., Elsen, E., Kalchbrenner, N., Zen, H., Graves, A., King, H., Walters, T., Belov, D. & Hassabis, D. (2018). Parallel WaveNet: Fast High-Fidelity Speech Synthesis. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3918-3926. Available from https://proceedings.mlr.press/v80/oord18a.html.