Flow as the Cross-domain Manipulation Interface

Mengda Xu, Zhenjia Xu, Yinghao Xu, Cheng Chi, Gordon Wetzstein, Manuela Veloso, Shuran Song
Proceedings of The 8th Conference on Robot Learning, PMLR 270:2475-2499, 2025.

Abstract

We present Im2Flow2Act, a scalable learning framework that enables robots to acquire real-world manipulation skills without the need for real-world robot training data. The key idea behind Im2Flow2Act is to use object flow as the manipulation interface, bridging domain gaps between different embodiments (i.e., human and robot) and training environments (i.e., real-world and simulated). Im2Flow2Act comprises two components: a flow generation network and a flow-conditioned policy. The flow generation network, trained on human demonstration videos, generates object flow from the initial scene image, conditioned on the task description. The flow-conditioned policy, trained on simulated robot play data, maps the generated object flow to robot actions to realize the desired object movements. By using flow as input, this policy can be directly deployed in the real world with a minimal sim-to-real gap. By leveraging real-world human videos and simulated robot play data, we bypass the challenges of teleoperating physical robots in the real world, resulting in a scalable system for diverse tasks. We demonstrate Im2Flow2Act’s capabilities in a variety of real-world tasks, including the manipulation of rigid, articulated, and deformable objects.
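To make the two-component pipeline concrete, the sketch below shows how the pieces described above could be wired together at deployment time. Every name (FlowGenerator, FlowConditionedPolicy, run_episode, the env interface) and every array shape is a hypothetical placeholder chosen for illustration, not the authors' released implementation.

import numpy as np


class FlowGenerator:
    """Placeholder for the flow generation network (trained on human demonstration videos)."""

    def generate(self, initial_image: np.ndarray, task_description: str) -> np.ndarray:
        # Object flow as a (T, N, 2) array: 2-D image positions of N tracked
        # object points over T future steps; zeros here as a stub.
        t_steps, n_points = 16, 32
        return np.zeros((t_steps, n_points, 2), dtype=np.float32)


class FlowConditionedPolicy:
    """Placeholder for the flow-conditioned policy (trained on simulated robot play data)."""

    def act(self, image: np.ndarray, target_flow: np.ndarray) -> np.ndarray:
        # A generic 7-DoF action (e.g., end-effector delta pose plus gripper); zeros as a stub.
        return np.zeros(7, dtype=np.float32)


def run_episode(env, flow_generator, policy, task_description, horizon=100):
    """Generate object flow once from the initial scene image, then let the policy track it."""
    image = env.reset()                                        # initial RGB observation
    flow = flow_generator.generate(image, task_description)    # flow is the manipulation interface
    for _ in range(horizon):
        action = policy.act(image, flow)
        image, done = env.step(action)                         # assumed environment API
        if done:
            break
    return image

Because the policy conditions only on the current image and the object flow, a policy trained entirely in simulation can in principle be reused in the real world once a real-world flow is supplied; this is the sim-to-real argument made in the abstract.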

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-xu25a,
  title     = {Flow as the Cross-domain Manipulation Interface},
  author    = {Xu, Mengda and Xu, Zhenjia and Xu, Yinghao and Chi, Cheng and Wetzstein, Gordon and Veloso, Manuela and Song, Shuran},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {2475--2499},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/xu25a/xu25a.pdf},
  url       = {https://proceedings.mlr.press/v270/xu25a.html}
}
Endnote
%0 Conference Paper
%T Flow as the Cross-domain Manipulation Interface
%A Mengda Xu
%A Zhenjia Xu
%A Yinghao Xu
%A Cheng Chi
%A Gordon Wetzstein
%A Manuela Veloso
%A Shuran Song
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-xu25a
%I PMLR
%P 2475--2499
%U https://proceedings.mlr.press/v270/xu25a.html
%V 270
APA
Xu, M., Xu, Z., Xu, Y., Chi, C., Wetzstein, G., Veloso, M., & Song, S. (2025). Flow as the Cross-domain Manipulation Interface. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:2475-2499. Available from https://proceedings.mlr.press/v270/xu25a.html.
