CLAR: Contrastive Learning of Auditory Representations

Haider Al-Tahan, Yalda Mohsenzadeh
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2530-2538, 2021.

Abstract

Learning rich visual representations with contrastive self-supervised learning has been extremely successful. However, whether a similar approach can be used to learn superior auditory representations remains an open question. In this paper, we expand on prior work (SimCLR) to learn better auditory representations. We (1) introduce various data augmentations suitable for auditory data and evaluate their impact on predictive performance, (2) show that training with time-frequency audio features substantially improves the quality of the learned representations compared to raw signals, and (3) demonstrate that training with both supervised and contrastive losses simultaneously improves the learned representations compared to self-supervised pre-training followed by supervised fine-tuning. We show that by combining all of these methods, and with substantially less labeled data, our framework (CLAR) achieves a significant improvement in predictive performance over a supervised approach. Moreover, compared to a self-supervised approach, our framework converges faster and yields significantly better representations.
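The joint objective described in point (3) can be sketched compactly. The following PyTorch snippet is a minimal illustration under stated assumptions, not the authors' released implementation: the encoder, projection head, classifier, the SimCLR-style NT-Xent contrastive loss, and the weighting term alpha are all placeholders inferred from the SimCLR lineage the abstract cites.

    # Minimal sketch (not the authors' code) of jointly optimizing a supervised
    # cross-entropy loss and a SimCLR-style NT-Xent contrastive loss on two
    # augmented views of each audio clip. Names and hyperparameters are
    # illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        """NT-Xent loss over paired projected embeddings z1, z2 of shape (N, D)."""
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
        sim = z @ z.t() / temperature                        # cosine similarities
        sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
        # The positive for sample i is its other augmented view at (i + n) mod 2n.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    def clar_step(encoder, projector, classifier, views, labels, alpha=1.0):
        """One training step: contrastive loss on all samples, plus supervised
        cross-entropy on the (possibly small) labeled subset, weighted by alpha."""
        x1, x2 = views                      # two augmented views of the same clips
        h1, h2 = encoder(x1), encoder(x2)   # representations
        z1, z2 = projector(h1), projector(h2)
        loss = nt_xent_loss(z1, z2)
        if labels is not None:              # labels may be absent for unlabeled data
            loss = loss + alpha * F.cross_entropy(classifier(h1), labels)
        return loss

In this sketch, the two views would come from the audio augmentations evaluated in point (1), applied to time-frequency features such as spectrograms rather than raw waveforms, per point (2).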

Cite this Paper

BibTeX
@InProceedings{pmlr-v130-al-tahan21a,
  title     = {CLAR: Contrastive Learning of Auditory Representations},
  author    = {Al-Tahan, Haider and Mohsenzadeh, Yalda},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2530--2538},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/al-tahan21a/al-tahan21a.pdf},
  url       = {https://proceedings.mlr.press/v130/al-tahan21a.html},
  abstract  = {Learning rich visual representations with contrastive self-supervised learning has been extremely successful. However, whether a similar approach can be used to learn superior auditory representations remains an open question. In this paper, we expand on prior work (SimCLR) to learn better auditory representations. We (1) introduce various data augmentations suitable for auditory data and evaluate their impact on predictive performance, (2) show that training with time-frequency audio features substantially improves the quality of the learned representations compared to raw signals, and (3) demonstrate that training with both supervised and contrastive losses simultaneously improves the learned representations compared to self-supervised pre-training followed by supervised fine-tuning. We show that by combining all of these methods, and with substantially less labeled data, our framework (CLAR) achieves a significant improvement in predictive performance over a supervised approach. Moreover, compared to a self-supervised approach, our framework converges faster and yields significantly better representations.}
}
Endnote
%0 Conference Paper
%T CLAR: Contrastive Learning of Auditory Representations
%A Haider Al-Tahan
%A Yalda Mohsenzadeh
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-al-tahan21a
%I PMLR
%P 2530--2538
%U https://proceedings.mlr.press/v130/al-tahan21a.html
%V 130
%X Learning rich visual representations with contrastive self-supervised learning has been extremely successful. However, whether a similar approach can be used to learn superior auditory representations remains an open question. In this paper, we expand on prior work (SimCLR) to learn better auditory representations. We (1) introduce various data augmentations suitable for auditory data and evaluate their impact on predictive performance, (2) show that training with time-frequency audio features substantially improves the quality of the learned representations compared to raw signals, and (3) demonstrate that training with both supervised and contrastive losses simultaneously improves the learned representations compared to self-supervised pre-training followed by supervised fine-tuning. We show that by combining all of these methods, and with substantially less labeled data, our framework (CLAR) achieves a significant improvement in predictive performance over a supervised approach. Moreover, compared to a self-supervised approach, our framework converges faster and yields significantly better representations.
APA
Al-Tahan, H. & Mohsenzadeh, Y. (2021). CLAR: Contrastive Learning of Auditory Representations. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2530-2538. Available from https://proceedings.mlr.press/v130/al-tahan21a.html.
