ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers

Kaizhi Qian, Yang Zhang, Heting Gao, Junrui Ni, Cheng-I Lai, David Cox, Mark Hasegawa-Johnson, Shiyu Chang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:18003-18017, 2022.

Abstract

Self-supervised learning (SSL) in speech involves training a speech representation network on a large-scale unannotated speech corpus and then applying the learned representations to downstream tasks. Since most downstream tasks of speech SSL focus on the content information in speech, the most desirable speech representations should disentangle unwanted variations, such as speaker variations, from the content. However, disentangling speakers is very challenging, because removing the speaker information can easily cause a loss of content as well, and the damage from losing content usually far outweighs the benefit of removing speaker information. In this paper, we propose a new SSL method that achieves speaker disentanglement without severe loss of content. Our approach is adapted from the HuBERT framework and incorporates disentangling mechanisms that regularize both the teacher labels and the learned representations. We evaluate the benefit of speaker disentanglement on a set of content-related downstream tasks and observe a consistent and notable performance advantage for our speaker-disentangled representations.
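One of the mechanisms the abstract names, regularizing the learned representations, can be illustrated with a short sketch: encourage the per-frame representations of two speaker-perturbed views of the same utterance to agree, so speaker variation is factored out of the content. This is a minimal illustration under stated assumptions, not the authors' released implementation: the perturb_speaker transform, the encoder interface, and the SimCLR-style per-frame loss are all hypothetical stand-ins chosen for the example.

```python
import torch
import torch.nn.functional as F

def perturb_speaker(wave: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for a speaker-identity perturbation
    # (e.g. random pitch or formant shifting). A real transform would
    # alter speaker cues while preserving the spoken content.
    return wave + 0.01 * torch.randn_like(wave)

def frame_agreement_loss(z1: torch.Tensor, z2: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    # SimCLR-style contrastive loss over per-frame representations of
    # two views of the same utterance: frame t of view 1 should match
    # frame t of view 2 and mismatch every other frame.
    # z1, z2: (T, D) tensors of frame representations.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature   # (T, T) cosine similarities
    targets = torch.arange(z1.size(0))   # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Hypothetical usage with any frame-level speech encoder `encoder`:
#   view_a, view_b = perturb_speaker(wave), perturb_speaker(wave)
#   loss = frame_agreement_loss(encoder(view_a), encoder(view_b))
```

Because both views carry the same content but perturbed speaker characteristics, minimizing this loss pushes the representations toward encoding content and away from speaker identity; per the abstract, an analogous regularization is also applied to the teacher labels.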

Cite this Paper

BibTeX
@InProceedings{pmlr-v162-qian22b,
  title     = {{C}ontent{V}ec: An Improved Self-Supervised Speech Representation by Disentangling Speakers},
  author    = {Qian, Kaizhi and Zhang, Yang and Gao, Heting and Ni, Junrui and Lai, Cheng-I and Cox, David and Hasegawa-Johnson, Mark and Chang, Shiyu},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {18003--18017},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/qian22b/qian22b.pdf},
  url       = {https://proceedings.mlr.press/v162/qian22b.html}
}
Endnote
%0 Conference Paper
%T ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers
%A Kaizhi Qian
%A Yang Zhang
%A Heting Gao
%A Junrui Ni
%A Cheng-I Lai
%A David Cox
%A Mark Hasegawa-Johnson
%A Shiyu Chang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-qian22b
%I PMLR
%P 18003--18017
%U https://proceedings.mlr.press/v162/qian22b.html
%V 162
APA
Qian, K., Zhang, Y., Gao, H., Ni, J., Lai, C., Cox, D., Hasegawa-Johnson, M. & Chang, S. (2022). ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:18003-18017. Available from https://proceedings.mlr.press/v162/qian22b.html.
