BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping

Gasser Elbanna, Neil Scheidwasser-Clow, Mikolaj Kegler, Pierre Beckmann, Karl El Hajal, Milos Cernak
HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition), PMLR 166:25-47, 2022.

Abstract

Methods for extracting audio and speech features have been studied since pioneering work on spectrum analysis decades ago. Recent efforts are guided by the ambition to develop general-purpose audio representations. For example, deep neural networks can extract optimal embeddings if they are trained on large audio datasets. This work extends existing methods based on self-supervised learning by bootstrapping, proposes various encoder architectures, and explores the effects of using different pre-training datasets. Lastly, we present a novel training framework to derive a hybrid audio representation that combines handcrafted and data-driven learned audio features. All the proposed representations were evaluated within the HEAR NeurIPS 2021 challenge for auditory scene classification and timestamp detection tasks. Our results indicate that the hybrid model with a convolutional transformer as the encoder yields superior performance in most HEAR challenge tasks.
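
For readers unfamiliar with "self-supervised learning by bootstrapping", the sketch below illustrates the general BYOL recipe that this family of models builds on: an online network learns to predict the output of a slowly moving target network for a second augmented view of the same clip. This is a minimal PyTorch sketch under assumed settings, not the authors' exact BYOL-S configuration; the module names (MLP, BYOL), dimensions, and momentum tau=0.99 are illustrative assumptions, and the audio encoder is left generic.

# Minimal sketch of BYOL-style bootstrapping for audio embeddings.
# All names and hyperparameters are illustrative assumptions, not the
# published BYOL-S setup.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Projector/predictor head commonly used in BYOL-style training."""
    def __init__(self, in_dim, hidden_dim=4096, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def byol_loss(p, z):
    """Normalized MSE, i.e. negative cosine similarity up to a constant."""
    p = F.normalize(p, dim=-1)
    z = F.normalize(z, dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()

class BYOL(nn.Module):
    def __init__(self, encoder, emb_dim, tau=0.99):
        super().__init__()
        self.online_encoder = encoder
        self.online_projector = MLP(emb_dim)
        self.predictor = MLP(256, out_dim=256)
        # Target network: an EMA copy of the online network, never updated
        # by gradient descent.
        self.target_encoder = copy.deepcopy(encoder)
        self.target_projector = copy.deepcopy(self.online_projector)
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        for p in self.target_projector.parameters():
            p.requires_grad = False
        self.tau = tau

    @torch.no_grad()
    def update_target(self):
        """Momentum (EMA) update of the target from the online network."""
        for o, t in zip(self.online_encoder.parameters(),
                        self.target_encoder.parameters()):
            t.mul_(self.tau).add_((1 - self.tau) * o)
        for o, t in zip(self.online_projector.parameters(),
                        self.target_projector.parameters()):
            t.mul_(self.tau).add_((1 - self.tau) * o)

    def forward(self, view1, view2):
        # view1, view2: two augmented views of the same audio clip.
        p1 = self.predictor(self.online_projector(self.online_encoder(view1)))
        p2 = self.predictor(self.online_projector(self.online_encoder(view2)))
        with torch.no_grad():
            z1 = self.target_projector(self.target_encoder(view1))
            z2 = self.target_projector(self.target_encoder(view2))
        # Symmetrized loss: each view's prediction targets the other view.
        return byol_loss(p1, z2) + byol_loss(p2, z1)

In use, one would call update_target() after each optimizer step so the target slowly tracks the online weights, and keep only the online encoder as the downstream feature extractor. The hybrid variant described in the abstract additionally ties the learned embedding to handcrafted features; the exact mechanism is specified in the paper, not in this sketch.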

Cite this Paper


BibTeX
@InProceedings{pmlr-v166-elbanna22a,
  title     = {{BYOL-S}: Learning Self-supervised Speech Representations by Bootstrapping},
  author    = {Elbanna, Gasser and Scheidwasser-Clow, Neil and Kegler, Mikolaj and Beckmann, Pierre and El Hajal, Karl and Cernak, Milos},
  booktitle = {HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition)},
  pages     = {25--47},
  year      = {2022},
  editor    = {Turian, Joseph and Schuller, Björn W. and Herremans, Dorien and Kirchhoff, Katrin and Perera, Paola Garcia and Esling, Philippe},
  volume    = {166},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v166/elbanna22a/elbanna22a.pdf},
  url       = {https://proceedings.mlr.press/v166/elbanna22a.html},
  abstract  = {Methods for extracting audio and speech features have been studied since pioneering work on spectrum analysis decades ago. Recent efforts are guided by the ambition to develop general-purpose audio representations. For example, deep neural networks can extract optimal embeddings if they are trained on large audio datasets. This work extends existing methods based on self-supervised learning by bootstrapping, proposes various encoder architectures, and explores the effects of using different pre-training datasets. Lastly, we present a novel training framework to derive a hybrid audio representation that combines handcrafted and data-driven learned audio features. All the proposed representations were evaluated within the HEAR NeurIPS 2021 challenge for auditory scene classification and timestamp detection tasks. Our results indicate that the hybrid model with a convolutional transformer as the encoder yields superior performance in most HEAR challenge tasks.}
}
Endnote
%0 Conference Paper
%T BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping
%A Gasser Elbanna
%A Neil Scheidwasser-Clow
%A Mikolaj Kegler
%A Pierre Beckmann
%A Karl El Hajal
%A Milos Cernak
%B HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition)
%C Proceedings of Machine Learning Research
%D 2022
%E Joseph Turian
%E Björn W. Schuller
%E Dorien Herremans
%E Katrin Kirchhoff
%E Paola Garcia Perera
%E Philippe Esling
%F pmlr-v166-elbanna22a
%I PMLR
%P 25--47
%U https://proceedings.mlr.press/v166/elbanna22a.html
%V 166
%X Methods for extracting audio and speech features have been studied since pioneering work on spectrum analysis decades ago. Recent efforts are guided by the ambition to develop general-purpose audio representations. For example, deep neural networks can extract optimal embeddings if they are trained on large audio datasets. This work extends existing methods based on self-supervised learning by bootstrapping, proposes various encoder architectures, and explores the effects of using different pre-training datasets. Lastly, we present a novel training framework to derive a hybrid audio representation that combines handcrafted and data-driven learned audio features. All the proposed representations were evaluated within the HEAR NeurIPS 2021 challenge for auditory scene classification and timestamp detection tasks. Our results indicate that the hybrid model with a convolutional transformer as the encoder yields superior performance in most HEAR challenge tasks.
APA
Elbanna, G., Scheidwasser-Clow, N., Kegler, M., Beckmann, P., El Hajal, K. & Cernak, M. (2022). BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping. HEAR: Holistic Evaluation of Audio Representations (NeurIPS 2021 Competition), in Proceedings of Machine Learning Research 166:25-47. Available from https://proceedings.mlr.press/v166/elbanna22a.html.