Adaptive Accompaniment with ReaLchords

Yusong Wu, Tim Cooijmans, Kyle Kastner, Adam Roberts, Ian Simon, Alexander Scarlatos, Chris Donahue, Cassie Tarakajian, Shayegan Omidshafiei, Aaron Courville, Pablo Samuel Castro, Natasha Jaques, Cheng-Zhi Anna Huang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:53328-53345, 2024.

Abstract

Jamming requires coordination, anticipation, and collaborative creativity between musicians. Current generative models of music produce expressive output but are not able to generate in an online manner, meaning simultaneously with other musicians (human or otherwise). We propose ReaLchords, an online generative model for improvising chord accompaniment to user melody. We start with an online model pretrained by maximum likelihood, and use reinforcement learning to finetune the model for online use. The finetuning objective leverages both a novel reward model that provides feedback on both harmonic and temporal coherency between melody and chord, and a divergence term that implements a novel type of distillation from a teacher model that can see the future melody. Through quantitative experiments and listening tests, we demonstrate that the resulting model adapts well to unfamiliar input and produces fitting accompaniment. ReaLchords opens the door to live jamming, as well as simultaneous co-creation in other modalities.
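The finetuning objective described in the abstract (a learned reward plus a divergence term distilling from a future-seeing teacher) can be sketched as a single expression. The notation below is an illustrative assumption, not taken from the paper: $\pi_\theta$ is the online accompaniment model, $\pi^{*}$ is the teacher conditioned on the full melody $x$, $R$ is the learned reward over a melody-chord pair, and $\beta$ trades off the two terms:

```latex
% Hedged sketch of an RL finetuning objective of this shape (symbols assumed):
% maximize expected reward while keeping the online policy close to a teacher
% that sees the entire melody x, not just the prefix x_{<= t}.
J(\theta) =
  \mathbb{E}_{y \sim \pi_\theta(\cdot \mid x)}\bigl[ R(x, y) \bigr]
  - \beta \sum_{t} D_{\mathrm{KL}}\!\bigl(
      \pi_\theta(y_t \mid x_{\le t}, y_{<t})
      \,\big\|\,
      \pi^{*}(y_t \mid x, y_{<t})
    \bigr)
```

The key structural point is the asymmetry in conditioning: the student only sees the melody up to the current timestep, while the teacher inside the divergence term conditions on the whole melody, which is what makes the term a form of distillation rather than an ordinary KL penalty to the pretrained model.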

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-wu24c,
  title     = {Adaptive Accompaniment with {R}ea{L}chords},
  author    = {Wu, Yusong and Cooijmans, Tim and Kastner, Kyle and Roberts, Adam and Simon, Ian and Scarlatos, Alexander and Donahue, Chris and Tarakajian, Cassie and Omidshafiei, Shayegan and Courville, Aaron and Castro, Pablo Samuel and Jaques, Natasha and Huang, Cheng-Zhi Anna},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {53328--53345},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wu24c/wu24c.pdf},
  url       = {https://proceedings.mlr.press/v235/wu24c.html},
  abstract  = {Jamming requires coordination, anticipation, and collaborative creativity between musicians. Current generative models of music produce expressive output but are not able to generate in an online manner, meaning simultaneously with other musicians (human or otherwise). We propose ReaLchords, an online generative model for improvising chord accompaniment to user melody. We start with an online model pretrained by maximum likelihood, and use reinforcement learning to finetune the model for online use. The finetuning objective leverages both a novel reward model that provides feedback on both harmonic and temporal coherency between melody and chord, and a divergence term that implements a novel type of distillation from a teacher model that can see the future melody. Through quantitative experiments and listening tests, we demonstrate that the resulting model adapts well to unfamiliar input and produces fitting accompaniment. ReaLchords opens the door to live jamming, as well as simultaneous co-creation in other modalities.}
}
Endnote
%0 Conference Paper
%T Adaptive Accompaniment with ReaLchords
%A Yusong Wu
%A Tim Cooijmans
%A Kyle Kastner
%A Adam Roberts
%A Ian Simon
%A Alexander Scarlatos
%A Chris Donahue
%A Cassie Tarakajian
%A Shayegan Omidshafiei
%A Aaron Courville
%A Pablo Samuel Castro
%A Natasha Jaques
%A Cheng-Zhi Anna Huang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-wu24c
%I PMLR
%P 53328--53345
%U https://proceedings.mlr.press/v235/wu24c.html
%V 235
%X Jamming requires coordination, anticipation, and collaborative creativity between musicians. Current generative models of music produce expressive output but are not able to generate in an online manner, meaning simultaneously with other musicians (human or otherwise). We propose ReaLchords, an online generative model for improvising chord accompaniment to user melody. We start with an online model pretrained by maximum likelihood, and use reinforcement learning to finetune the model for online use. The finetuning objective leverages both a novel reward model that provides feedback on both harmonic and temporal coherency between melody and chord, and a divergence term that implements a novel type of distillation from a teacher model that can see the future melody. Through quantitative experiments and listening tests, we demonstrate that the resulting model adapts well to unfamiliar input and produces fitting accompaniment. ReaLchords opens the door to live jamming, as well as simultaneous co-creation in other modalities.
APA
Wu, Y., Cooijmans, T., Kastner, K., Roberts, A., Simon, I., Scarlatos, A., Donahue, C., Tarakajian, C., Omidshafiei, S., Courville, A., Castro, P.S., Jaques, N. & Huang, C.A. (2024). Adaptive Accompaniment with ReaLchords. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:53328-53345. Available from https://proceedings.mlr.press/v235/wu24c.html.
