MusicFlow: Cascaded Flow Matching for Text Guided Music Generation

K R Prajwal, Bowen Shi, Matthew Le, Apoorv Vyas, Andros Tjandra, Mahi Luthra, Baishan Guo, Huiyu Wang, Triantafyllos Afouras, David Kant, Wei-Ning Hsu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:41052-41063, 2024.

Abstract

We introduce MusicFlow, a cascaded text-to-music generation model based on flow matching. Leveraging self-supervised representations to bridge text descriptions and music audio, we construct two flow matching networks to model the conditional distributions of semantic and acoustic features. Additionally, we adopt masked prediction as the training objective, enabling the model to generalize to other tasks, such as music infilling and continuation, in a zero-shot manner. Experiments on MusicCaps reveal that the music generated by MusicFlow exhibits superior quality and text coherence despite the model being $2\sim5$ times smaller and requiring $5$ times fewer iterative steps. Simultaneously, the model can perform other music generation tasks and achieves competitive performance in music infilling and continuation.
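The training objective underlying each stage of the cascade can be illustrated with a minimal sketch of (conditional) flow matching: sample a noise point and a time, interpolate toward the data, and regress a network onto the interpolant's velocity. The tiny linear "model" and all names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_loss(x1, predict_v):
    """Conditional flow matching loss for a batch of targets x1.

    Sample noise x0 and time t, form the linear interpolant
    x_t = (1 - t) * x0 + t * x1, whose velocity is x1 - x0,
    and regress the model's velocity prediction onto it.
    """
    x0 = rng.standard_normal(x1.shape)       # noise sample
    t = rng.uniform(size=(x1.shape[0], 1))   # per-example time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1             # point on the probability path
    v_target = x1 - x0                       # constant velocity of the path
    v_pred = predict_v(xt, t)
    return float(np.mean((v_pred - v_target) ** 2))

# Trivial stand-in "network": a linear map of (x_t, t) to a velocity.
W = rng.standard_normal((3, 2)) * 0.1
predict_v = lambda xt, t: np.concatenate([xt, t], axis=1) @ W

x1 = rng.standard_normal((8, 2))  # stand-in for acoustic/semantic features
loss = cfm_loss(x1, predict_v)
print(loss >= 0.0)
```

In MusicFlow's cascade, one such network maps text embeddings to semantic features and a second maps semantics (plus text) to acoustic features; the sketch above only shows the shared regression objective, with sampling done by integrating the learned velocity field over a few steps.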

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-prajwal24a,
  title     = {{M}usic{F}low: Cascaded Flow Matching for Text Guided Music Generation},
  author    = {Prajwal, K R and Shi, Bowen and Le, Matthew and Vyas, Apoorv and Tjandra, Andros and Luthra, Mahi and Guo, Baishan and Wang, Huiyu and Afouras, Triantafyllos and Kant, David and Hsu, Wei-Ning},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {41052--41063},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/prajwal24a/prajwal24a.pdf},
  url       = {https://proceedings.mlr.press/v235/prajwal24a.html},
  abstract  = {We introduce MusicFlow, a cascaded text-to-music generation model based on flow matching. Based on self-supervised representations to bridge between text descriptions and music audios, we construct two flow matching networks to model the conditional distribution of semantic and acoustic features. Additionally, we leverage masked prediction as the training objective, enabling the model to generalize to other tasks such as music infilling and continuation in a zero-shot manner. Experiments on MusicCaps reveal that the music generated by MusicFlow exhibits superior quality and text coherence despite being over $2\sim5$ times smaller and requiring $5$ times fewer iterative steps. Simultaneously, the model can perform other music generation tasks and achieves competitive performance in music infilling and continuation.}
}
Endnote
%0 Conference Paper
%T MusicFlow: Cascaded Flow Matching for Text Guided Music Generation
%A K R Prajwal
%A Bowen Shi
%A Matthew Le
%A Apoorv Vyas
%A Andros Tjandra
%A Mahi Luthra
%A Baishan Guo
%A Huiyu Wang
%A Triantafyllos Afouras
%A David Kant
%A Wei-Ning Hsu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-prajwal24a
%I PMLR
%P 41052--41063
%U https://proceedings.mlr.press/v235/prajwal24a.html
%V 235
%X We introduce MusicFlow, a cascaded text-to-music generation model based on flow matching. Based on self-supervised representations to bridge between text descriptions and music audios, we construct two flow matching networks to model the conditional distribution of semantic and acoustic features. Additionally, we leverage masked prediction as the training objective, enabling the model to generalize to other tasks such as music infilling and continuation in a zero-shot manner. Experiments on MusicCaps reveal that the music generated by MusicFlow exhibits superior quality and text coherence despite being over $2\sim5$ times smaller and requiring $5$ times fewer iterative steps. Simultaneously, the model can perform other music generation tasks and achieves competitive performance in music infilling and continuation.
APA
Prajwal, K.R., Shi, B., Le, M., Vyas, A., Tjandra, A., Luthra, M., Guo, B., Wang, H., Afouras, T., Kant, D. & Hsu, W. (2024). MusicFlow: Cascaded Flow Matching for Text Guided Music Generation. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:41052-41063. Available from https://proceedings.mlr.press/v235/prajwal24a.html.
