Learning de-identified representations of prosody from raw audio

Jack Weston, Raphael Lenain, Udeepa Meepegama, Emil Fristed
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11134-11145, 2021.

Abstract

We propose a method for learning de-identified prosody representations from raw audio using a contrastive self-supervised signal. Whereas prior work has relied on conditioning models with bottlenecks, we introduce a set of inductive biases that exploit the natural structure of prosody to minimize timbral information and decouple prosody from speaker representations. Despite aggressive downsampling of the input and having no access to linguistic information, our model performs comparably to state-of-the-art speech representations on DAMMP, a new benchmark we introduce for spoken language understanding. We use minimum description length probing to show that our representations have selectively learned the subcomponents of non-timbral prosody, and that the product quantizer naturally disentangles them without using bottlenecks. We derive an information-theoretic definition of speech de-identifiability and use it to demonstrate that our prosody representations are less identifiable than the other speech representations.
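The abstract mentions training with a contrastive self-supervised signal. As background, a generic InfoNCE-style contrastive loss (a common choice for such signals; this is an illustrative sketch, not the authors' exact objective) can be written as:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss (illustrative sketch only).

    Each anchor's positive is the same-index row of `positives`; all
    other rows in the batch serve as negatives.
    """
    # L2-normalize embeddings so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N); diagonal holds positive pairs
    # Numerically stable row-wise log-softmax
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against the diagonal (each anchor matched to its positive)
    return -np.mean(np.diag(log_probs))
```

The loss is minimized when each anchor is most similar to its own positive and dissimilar to the in-batch negatives; the paper's actual objective and inductive biases are described in the full text.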

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-weston21a,
  title     = {Learning de-identified representations of prosody from raw audio},
  author    = {Weston, Jack and Lenain, Raphael and Meepegama, Udeepa and Fristed, Emil},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11134--11145},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/weston21a/weston21a.pdf},
  url       = {https://proceedings.mlr.press/v139/weston21a.html},
  abstract  = {We propose a method for learning de-identified prosody representations from raw audio using a contrastive self-supervised signal. Whereas prior work has relied on conditioning models with bottlenecks, we introduce a set of inductive biases that exploit the natural structure of prosody to minimize timbral information and decouple prosody from speaker representations. Despite aggressive downsampling of the input and having no access to linguistic information, our model performs comparably to state-of-the-art speech representations on DAMMP, a new benchmark we introduce for spoken language understanding. We use minimum description length probing to show that our representations have selectively learned the subcomponents of non-timbral prosody, and that the product quantizer naturally disentangles them without using bottlenecks. We derive an information-theoretic definition of speech de-identifiability and use it to demonstrate that our prosody representations are less identifiable than the other speech representations.}
}
Endnote
%0 Conference Paper
%T Learning de-identified representations of prosody from raw audio
%A Jack Weston
%A Raphael Lenain
%A Udeepa Meepegama
%A Emil Fristed
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-weston21a
%I PMLR
%P 11134--11145
%U https://proceedings.mlr.press/v139/weston21a.html
%V 139
%X We propose a method for learning de-identified prosody representations from raw audio using a contrastive self-supervised signal. Whereas prior work has relied on conditioning models with bottlenecks, we introduce a set of inductive biases that exploit the natural structure of prosody to minimize timbral information and decouple prosody from speaker representations. Despite aggressive downsampling of the input and having no access to linguistic information, our model performs comparably to state-of-the-art speech representations on DAMMP, a new benchmark we introduce for spoken language understanding. We use minimum description length probing to show that our representations have selectively learned the subcomponents of non-timbral prosody, and that the product quantizer naturally disentangles them without using bottlenecks. We derive an information-theoretic definition of speech de-identifiability and use it to demonstrate that our prosody representations are less identifiable than the other speech representations.
APA
Weston, J., Lenain, R., Meepegama, U. &amp; Fristed, E. (2021). Learning de-identified representations of prosody from raw audio. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11134-11145. Available from https://proceedings.mlr.press/v139/weston21a.html.