The Ability of Self-Supervised Speech Models for Audio Representations

Tung-Yu Wu, Tsu-Yuan Hsu, Chen-An Li, Tzu-Han Lin, Hung-yi Lee
HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition), PMLR 166:90-110, 2022.

Abstract

Self-supervised learning (SSL) speech models, which can serve as powerful upstream models for extracting meaningful speech representations, have achieved unprecedented success in speech representation learning. However, their effectiveness on non-speech datasets remains relatively underexplored. In this work, we propose an ensemble framework, with a combination of ensemble techniques, to fuse SSL speech models' embeddings. Extensive experiments on speech and non-speech audio datasets are conducted to investigate the representation abilities of our ensemble method and each of its constituent models. Ablation studies are carried out to evaluate the performance of different ensemble techniques, such as feature averaging and concatenation. All experiments are conducted during the NeurIPS 2021 HEAR Challenge using a standard evaluation pipeline provided by the competition officials. Results demonstrate SSL speech models' strong abilities on various non-speech tasks, although we also note that they struggle with fine-grained music tasks such as pitch classification and note onset detection. In addition, feature ensembling is shown to have great potential for producing more holistic representations, as our proposed framework generally surpasses state-of-the-art SSL speech/audio models and outperforms other teams' submissions on various HEAR Challenge datasets. Our code is available at https://github.com/tony10101105/HEAR-2021-NeurIPS-Challenge---NTU-GURA.
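The abstract names two of the ensemble techniques studied, feature averaging and feature concatenation. A minimal sketch of how two upstream models' frame-level embeddings could be fused this way (array shapes and variable names are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

# Hypothetical frame-level embeddings from two SSL upstream models,
# time-aligned to the same number of frames (illustrative shapes only).
emb_a = np.random.rand(50, 768)  # model A: 50 frames, 768-dim features
emb_b = np.random.rand(50, 768)  # model B: 50 frames, 768-dim features

# Feature averaging: element-wise mean of equally shaped embeddings;
# the fused representation keeps the original dimensionality.
avg = (emb_a + emb_b) / 2                       # shape (50, 768)

# Feature concatenation: stack along the feature axis; the fused
# representation grows to the sum of the models' dimensions.
cat = np.concatenate([emb_a, emb_b], axis=-1)   # shape (50, 1536)

print(avg.shape, cat.shape)
```

Averaging requires the constituent embeddings to share a common dimensionality (or be projected to one), while concatenation preserves each model's features at the cost of a wider downstream input.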

Cite this Paper


BibTeX
@InProceedings{pmlr-v166-wu22a,
  title     = {The Ability of Self-Supervised Speech Models for Audio Representations},
  author    = {Wu, Tung-Yu and Hsu, Tsu-Yuan and Li, Chen-An and Lin, Tzu-Han and Lee, Hung-yi},
  booktitle = {HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition)},
  pages     = {90--110},
  year      = {2022},
  editor    = {Turian, Joseph and Schuller, Björn W. and Herremans, Dorien and Kirchhoff, Katrin and Perera, Paola Garcia and Esling, Philippe},
  volume    = {166},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v166/wu22a/wu22a.pdf},
  url       = {https://proceedings.mlr.press/v166/wu22a.html},
  abstract  = {Self-supervised learning (SSL) speech models, which can serve as powerful upstream models for extracting meaningful speech representations, have achieved unprecedented success in speech representation learning. However, their effectiveness on non-speech datasets remains relatively underexplored. In this work, we propose an ensemble framework, with a combination of ensemble techniques, to fuse SSL speech models' embeddings. Extensive experiments on speech and non-speech audio datasets are conducted to investigate the representation abilities of our ensemble method and each of its constituent models. Ablation studies are carried out to evaluate the performance of different ensemble techniques, such as feature averaging and concatenation. All experiments are conducted during the NeurIPS 2021 HEAR Challenge using a standard evaluation pipeline provided by the competition officials. Results demonstrate SSL speech models' strong abilities on various non-speech tasks, although we also note that they struggle with fine-grained music tasks such as pitch classification and note onset detection. In addition, feature ensembling is shown to have great potential for producing more holistic representations, as our proposed framework generally surpasses state-of-the-art SSL speech/audio models and outperforms other teams' submissions on various HEAR Challenge datasets. Our code is available at https://github.com/tony10101105/HEAR-2021-NeurIPS-Challenge---NTU-GURA.}
}
Endnote
%0 Conference Paper
%T The Ability of Self-Supervised Speech Models for Audio Representations
%A Tung-Yu Wu
%A Tsu-Yuan Hsu
%A Chen-An Li
%A Tzu-Han Lin
%A Hung-yi Lee
%B HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition)
%C Proceedings of Machine Learning Research
%D 2022
%E Joseph Turian
%E Björn W. Schuller
%E Dorien Herremans
%E Katrin Kirchhoff
%E Paola Garcia Perera
%E Philippe Esling
%F pmlr-v166-wu22a
%I PMLR
%P 90--110
%U https://proceedings.mlr.press/v166/wu22a.html
%V 166
%X Self-supervised learning (SSL) speech models, which can serve as powerful upstream models for extracting meaningful speech representations, have achieved unprecedented success in speech representation learning. However, their effectiveness on non-speech datasets remains relatively underexplored. In this work, we propose an ensemble framework, with a combination of ensemble techniques, to fuse SSL speech models' embeddings. Extensive experiments on speech and non-speech audio datasets are conducted to investigate the representation abilities of our ensemble method and each of its constituent models. Ablation studies are carried out to evaluate the performance of different ensemble techniques, such as feature averaging and concatenation. All experiments are conducted during the NeurIPS 2021 HEAR Challenge using a standard evaluation pipeline provided by the competition officials. Results demonstrate SSL speech models' strong abilities on various non-speech tasks, although we also note that they struggle with fine-grained music tasks such as pitch classification and note onset detection. In addition, feature ensembling is shown to have great potential for producing more holistic representations, as our proposed framework generally surpasses state-of-the-art SSL speech/audio models and outperforms other teams' submissions on various HEAR Challenge datasets. Our code is available at https://github.com/tony10101105/HEAR-2021-NeurIPS-Challenge---NTU-GURA.
APA
Wu, T., Hsu, T., Li, C., Lin, T. & Lee, H. (2022). The Ability of Self-Supervised Speech Models for Audio Representations. HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition), in Proceedings of Machine Learning Research 166:90-110. Available from https://proceedings.mlr.press/v166/wu22a.html.

Related Material