The Sound of Simulation: Learning Multimodal Sim-to-Real Robot Policies with Generative Audio

Renhao Wang, Haoran Geng, Tingle Li, Philipp Wu, Feishi Wang, Gopala Anumanchipalli, Trevor Darrell, Boyi Li, Pieter Abbeel, Jitendra Malik, Alexei A Efros
Proceedings of The 9th Conference on Robot Learning, PMLR 305:420-436, 2025.

Abstract

Robots must integrate multiple sensory modalities to act effectively in the real world. Yet, learning such multimodal policies at scale remains challenging. Simulation offers a viable solution, but while vision has benefited from high-fidelity simulators, other modalities (e.g. sound) can be notoriously difficult to simulate. As a result, sim-to-real transfer has succeeded primarily in vision-based tasks, with multimodal transfer still largely unrealized. In this work, we tackle these challenges by introducing MultiGen, a framework that integrates large-scale generative models into traditional physics simulators, enabling multisensory simulation. We showcase our framework on the dynamic task of robot pouring, which inherently relies on multimodal feedback. By synthesizing realistic audio conditioned on simulation video, our method enables training on rich audiovisual trajectories—without any real robot data. We demonstrate effective zero-shot transfer to real-world pouring with novel containers and liquids, highlighting the potential of generative modeling to both simulate hard-to-model modalities and close the multimodal sim-to-real gap.
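
To make the data-generation recipe concrete, below is a minimal, runnable sketch of a MultiGen-style pipeline. It is an illustration under stated assumptions, not the authors' implementation: simulate_pouring_episode and video_to_audio_model are hypothetical stand-ins for the physics simulator and the video-conditioned generative audio model, and random arrays stand in for real rendered frames, waveforms, and robot actions.

# Sketch (assumptions, not the paper's code): a physics simulator yields
# silent video + action trajectories; a generative model conditioned on the
# rendered frames fills in the missing audio modality; the fused
# (video, audio, action) tuples form the policy training set.
import numpy as np

def simulate_pouring_episode(n_steps=50, fps=10):
    """Stand-in for a simulator rollout: silent RGB frames plus actions."""
    frames = np.random.rand(n_steps, 64, 64, 3).astype(np.float32)  # video
    actions = np.random.randn(n_steps, 7).astype(np.float32)        # joint targets
    return frames, actions

def video_to_audio_model(frames, sample_rate=16_000, fps=10):
    """Stand-in for a video-conditioned generative audio model.
    Returns a waveform time-aligned to the video."""
    duration = len(frames) / fps
    return np.random.randn(int(duration * sample_rate)).astype(np.float32)

def build_audiovisual_dataset(n_episodes=100):
    """Fuse simulator video with generated audio into training trajectories."""
    dataset = []
    for _ in range(n_episodes):
        frames, actions = simulate_pouring_episode()
        waveform = video_to_audio_model(frames)  # synthesized sound
        dataset.append({"video": frames, "audio": waveform, "actions": actions})
    return dataset

if __name__ == "__main__":
    data = build_audiovisual_dataset(n_episodes=2)
    ep = data[0]
    print(ep["video"].shape, ep["audio"].shape, ep["actions"].shape)

The point of the structure is that no real robot data enters the loop: every modality in the training tuple is either simulated (video, actions) or generated conditioned on the simulation (audio).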

Cite this Paper

BibTeX
@InProceedings{pmlr-v305-wang25a,
  title     = {The Sound of Simulation: Learning Multimodal Sim-to-Real Robot Policies with Generative Audio},
  author    = {Wang, Renhao and Geng, Haoran and Li, Tingle and Wu, Philipp and Wang, Feishi and Anumanchipalli, Gopala and Darrell, Trevor and Li, Boyi and Abbeel, Pieter and Malik, Jitendra and Efros, Alexei A},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {420--436},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/wang25a/wang25a.pdf},
  url       = {https://proceedings.mlr.press/v305/wang25a.html},
  abstract  = {Robots must integrate multiple sensory modalities to act effectively in the real world. Yet, learning such multimodal policies at scale remains challenging. Simulation offers a viable solution, but while vision has benefited from high-fidelity simulators, other modalities (e.g. sound) can be notoriously difficult to simulate. As a result, sim-to-real transfer has succeeded primarily in vision-based tasks, with multimodal transfer still largely unrealized. In this work, we tackle these challenges by introducing MultiGen, a framework that integrates large-scale generative models into traditional physics simulators, enabling multisensory simulation. We showcase our framework on the dynamic task of robot pouring, which inherently relies on multimodal feedback. By synthesizing realistic audio conditioned on simulation video, our method enables training on rich audiovisual trajectories—without any real robot data. We demonstrate effective zero-shot transfer to real-world pouring with novel containers and liquids, highlighting the potential of generative modeling to both simulate hard-to-model modalities and close the multimodal sim-to-real gap.}
}
Endnote
%0 Conference Paper
%T The Sound of Simulation: Learning Multimodal Sim-to-Real Robot Policies with Generative Audio
%A Renhao Wang
%A Haoran Geng
%A Tingle Li
%A Philipp Wu
%A Feishi Wang
%A Gopala Anumanchipalli
%A Trevor Darrell
%A Boyi Li
%A Pieter Abbeel
%A Jitendra Malik
%A Alexei A Efros
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-wang25a
%I PMLR
%P 420--436
%U https://proceedings.mlr.press/v305/wang25a.html
%V 305
%X Robots must integrate multiple sensory modalities to act effectively in the real world. Yet, learning such multimodal policies at scale remains challenging. Simulation offers a viable solution, but while vision has benefited from high-fidelity simulators, other modalities (e.g. sound) can be notoriously difficult to simulate. As a result, sim-to-real transfer has succeeded primarily in vision-based tasks, with multimodal transfer still largely unrealized. In this work, we tackle these challenges by introducing MultiGen, a framework that integrates large-scale generative models into traditional physics simulators, enabling multisensory simulation. We showcase our framework on the dynamic task of robot pouring, which inherently relies on multimodal feedback. By synthesizing realistic audio conditioned on simulation video, our method enables training on rich audiovisual trajectories—without any real robot data. We demonstrate effective zero-shot transfer to real-world pouring with novel containers and liquids, highlighting the potential of generative modeling to both simulate hard-to-model modalities and close the multimodal sim-to-real gap.
APA
Wang, R., Geng, H., Li, T., Wu, P., Wang, F., Anumanchipalli, G., Darrell, T., Li, B., Abbeel, P., Malik, J. & Efros, A. A. (2025). The Sound of Simulation: Learning Multimodal Sim-to-Real Robot Policies with Generative Audio. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:420-436. Available from https://proceedings.mlr.press/v305/wang25a.html.