Supervised Contrastive Learning from Weakly-Labeled Audio Segments for Musical Version Matching

Joan Serrà, R. Oguz Araz, Dmitry Bogdanov, Yuki Mitsufuji
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:53923-53939, 2025.

Abstract

Detecting musical versions (different renditions of the same piece) is a challenging task with important applications. Because of the nature of the available ground truth, existing approaches match musical versions at the track level (e.g., whole song). However, most applications require matching them at the segment level (e.g., 20 s chunks). In addition, existing approaches resort to classification and triplet losses, disregarding more recent losses that could bring meaningful improvements. In this paper, we propose a method to learn from weakly annotated segments, together with a contrastive loss variant that outperforms well-studied alternatives. The former is based on pairwise segment distance reductions, while the latter modifies an existing loss following decoupling, hyper-parameter, and geometric considerations. With these two elements, we not only achieve state-of-the-art results in the standard track-level evaluation, but also obtain breakthrough performance in a segment-level evaluation. We believe that, due to the generality of the challenges addressed here, the proposed methods may find utility in domains beyond audio or musical version matching.
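To make the "pairwise segment distance reductions" idea concrete, below is a minimal PyTorch sketch (not the authors' implementation): each track is represented as a bag of segment embeddings, the full segment-to-segment distance matrix is reduced to a single track-pair distance, and that distance feeds a contrastive objective. The cosine distance, the min reduction, and the margin-style loss are illustrative assumptions on our part; the paper's actual loss is a modified contrastive variant described in the full text.

```python
import torch
import torch.nn.functional as F

def segment_distance_matrix(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """All pairwise cosine distances between segment embeddings.

    a: (Na, D) segment embeddings of track A.
    b: (Nb, D) segment embeddings of track B.
    Returns an (Na, Nb) matrix with values in [0, 2].
    """
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    return 1.0 - a @ b.T

def track_pair_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Reduce the segment-distance matrix to one track-pair distance.

    A min reduction (an illustrative choice) credits the best-matching
    segment pair, so a track-level "same version" label does not force
    every segment of A to match every segment of B.
    """
    return segment_distance_matrix(a, b).min()

def contrastive_margin_loss(d: torch.Tensor, same_version: bool,
                            margin: float = 0.5) -> torch.Tensor:
    """Toy contrastive objective on the reduced distance (illustrative;
    the paper's loss is a modified contrastive variant, not this form)."""
    if same_version:
        return d  # pull matching versions together
    return F.relu(margin - d)  # push non-versions beyond the margin

# Usage with random stand-ins for learned segment embeddings:
track_a = torch.randn(8, 128)   # 8 segments, 128-dim embeddings
track_b = torch.randn(11, 128)  # versions need not have equal length
loss = contrastive_margin_loss(track_pair_distance(track_a, track_b),
                               same_version=True)
```

The reduction is what turns weak, track-level version labels into a segment-level training signal: only the segment pairs that survive the reduction receive gradient, which matches the intuition that two versions of a piece may overlap only partially.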

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-serra25a,
  title     = {Supervised Contrastive Learning from Weakly-Labeled Audio Segments for Musical Version Matching},
  author    = {Serr\`{a}, Joan and Araz, R. Oguz and Bogdanov, Dmitry and Mitsufuji, Yuki},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {53923--53939},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/serra25a/serra25a.pdf},
  url       = {https://proceedings.mlr.press/v267/serra25a.html},
  abstract  = {Detecting musical versions (different renditions of the same piece) is a challenging task with important applications. Because of the ground truth nature, existing approaches match musical versions at the track level (e.g., whole song). However, most applications require to match them at the segment level (e.g., 20s chunks). In addition, existing approaches resort to classification and triplet losses, disregarding more recent losses that could bring meaningful improvements. In this paper, we propose a method to learn from weakly annotated segments, together with a contrastive loss variant that outperforms well-studied alternatives. The former is based on pairwise segment distance reductions, while the latter modifies an existing loss following decoupling, hyper-parameter, and geometric considerations. With these two elements, we do not only achieve state-of-the-art results in the standard track-level evaluation, but we also obtain a breakthrough performance in a segment-level evaluation. We believe that, due to the generality of the challenges addressed here, the proposed methods may find utility in domains beyond audio or musical version matching.}
}
Endnote
%0 Conference Paper
%T Supervised Contrastive Learning from Weakly-Labeled Audio Segments for Musical Version Matching
%A Joan Serrà
%A R. Oguz Araz
%A Dmitry Bogdanov
%A Yuki Mitsufuji
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-serra25a
%I PMLR
%P 53923--53939
%U https://proceedings.mlr.press/v267/serra25a.html
%V 267
%X Detecting musical versions (different renditions of the same piece) is a challenging task with important applications. Because of the ground truth nature, existing approaches match musical versions at the track level (e.g., whole song). However, most applications require to match them at the segment level (e.g., 20s chunks). In addition, existing approaches resort to classification and triplet losses, disregarding more recent losses that could bring meaningful improvements. In this paper, we propose a method to learn from weakly annotated segments, together with a contrastive loss variant that outperforms well-studied alternatives. The former is based on pairwise segment distance reductions, while the latter modifies an existing loss following decoupling, hyper-parameter, and geometric considerations. With these two elements, we do not only achieve state-of-the-art results in the standard track-level evaluation, but we also obtain a breakthrough performance in a segment-level evaluation. We believe that, due to the generality of the challenges addressed here, the proposed methods may find utility in domains beyond audio or musical version matching.
APA
Serrà, J., Araz, R. O., Bogdanov, D. & Mitsufuji, Y. (2025). Supervised Contrastive Learning from Weakly-Labeled Audio Segments for Musical Version Matching. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:53923-53939. Available from https://proceedings.mlr.press/v267/serra25a.html.
