Versatile Loco-Manipulation through Flexible Interlimb Coordination

Xinghao Zhu, Yuxin Chen, Lingfeng Sun, Farzad Niroui, Simon Le Cleac’h, Jiuguang Wang, Kuan Fang
Proceedings of The 9th Conference on Robot Learning, PMLR 305:610-632, 2025.

Abstract

The ability to flexibly leverage limbs for loco-manipulation is essential for enabling autonomous robots to operate in unstructured environments. Yet, prior work on loco-manipulation is often constrained to specific tasks or predetermined limb configurations. In this work, we present Reinforcement Learning for Interlimb Coordination (ReLIC), an approach that enables versatile loco-manipulation through flexible interlimb coordination. The key to our approach is an adaptive controller that seamlessly bridges the execution of manipulation motions and the generation of stable gaits based on task demands. Through the interplay between two controller modules, ReLIC dynamically assigns each limb for manipulation or locomotion and robustly coordinates them to achieve task success. Using efficient reinforcement learning in simulation, ReLIC learns to perform stable gaits in accordance with the manipulation goals in the real world. To solve diverse and complex tasks, we further propose to interface the learned controller with different types of task specifications, including target trajectories, contact points, and natural language instructions. Evaluated on 12 real-world tasks that require diverse and complex coordination patterns, ReLIC demonstrates its versatility and robustness by achieving a success rate of 78.9% on average.
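To make the abstract's controller structure concrete, the sketch below illustrates (in plain Python) how a per-limb dispatch between a manipulation module and a locomotion module might look at each control step. This is purely illustrative and is not the authors' released implementation: the names `manipulation_action`, `locomotion_action`, `control_step`, the quadruped assumption, and the task-specification format are all hypothetical stand-ins for the components described in the abstract.

```python
# Illustrative sketch only: NOT the ReLIC implementation. It shows, under assumed
# names, how a controller might assign each limb to either a manipulation module
# or a locomotion (gait) module at every control step, based on a task spec.
import numpy as np

NUM_LIMBS = 4  # assumption: a quadruped platform


def manipulation_action(limb_id, target_pose, state):
    """Hypothetical manipulation module: track a target end-effector pose."""
    gain = 0.5
    return gain * (target_pose - state["limb_pose"][limb_id])


def locomotion_action(limb_id, state):
    """Hypothetical gait module: stand-in for a learned gait policy."""
    # In the paper this role is played by a policy trained with RL in simulation;
    # here we return zeros of the right shape as a placeholder.
    return np.zeros_like(state["limb_pose"][limb_id])


def control_step(state, task_spec):
    """Assign each limb to manipulation or locomotion and compose one action."""
    actions = []
    for limb_id in range(NUM_LIMBS):
        if limb_id in task_spec:
            # Limbs named by the task specification execute manipulation motions.
            actions.append(manipulation_action(limb_id, task_spec[limb_id], state))
        else:
            # Remaining limbs maintain a stable gait.
            actions.append(locomotion_action(limb_id, state))
    return np.concatenate(actions)


# Usage: limb 0 tracks a target pose while limbs 1-3 keep the robot balanced.
state = {"limb_pose": [np.zeros(3) for _ in range(NUM_LIMBS)]}
task_spec = {0: np.array([0.2, 0.0, 0.3])}
print(control_step(state, task_spec))
```

The point of the sketch is the assignment logic: which limbs manipulate and which locomote is decided per step from the task specification, rather than being fixed in advance.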

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-zhu25a,
  title     = {Versatile Loco-Manipulation through Flexible Interlimb Coordination},
  author    = {Zhu, Xinghao and Chen, Yuxin and Sun, Lingfeng and Niroui, Farzad and Cleac'h, Simon Le and Wang, Jiuguang and Fang, Kuan},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {610--632},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/zhu25a/zhu25a.pdf},
  url       = {https://proceedings.mlr.press/v305/zhu25a.html},
  abstract  = {The ability to flexibly leverage limbs for loco-manipulation is essential for enabling autonomous robots to operate in unstructured environments. Yet, prior work on loco-manipulation is often constrained to specific tasks or predetermined limb configurations. In this work, we present Reinforcement Learning for Interlimb Coordination (ReLIC), an approach that enables versatile loco-manipulation through flexible interlimb coordination. The key to our approach is an adaptive controller that seamlessly bridges the execution of manipulation motions and the generation of stable gaits based on task demands. Through the interplay between two controller modules, ReLIC dynamically assigns each limb for manipulation or locomotion and robustly coordinates them to achieve task success. Using efficient reinforcement learning in simulation, ReLIC learns to perform stable gaits in accordance with the manipulation goals in the real world. To solve diverse and complex tasks, we further propose to interface the learned controller with different types of task specifications, including target trajectories, contact points, and natural language instructions. Evaluated on 12 real-world tasks that require diverse and complex coordination patterns, ReLIC demonstrates its versatility and robustness by achieving a success rate of 78.9% on average.}
}
Endnote
%0 Conference Paper
%T Versatile Loco-Manipulation through Flexible Interlimb Coordination
%A Xinghao Zhu
%A Yuxin Chen
%A Lingfeng Sun
%A Farzad Niroui
%A Simon Le Cleac’h
%A Jiuguang Wang
%A Kuan Fang
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-zhu25a
%I PMLR
%P 610--632
%U https://proceedings.mlr.press/v305/zhu25a.html
%V 305
%X The ability to flexibly leverage limbs for loco-manipulation is essential for enabling autonomous robots to operate in unstructured environments. Yet, prior work on loco-manipulation is often constrained to specific tasks or predetermined limb configurations. In this work, we present Reinforcement Learning for Interlimb Coordination (ReLIC), an approach that enables versatile loco-manipulation through flexible interlimb coordination. The key to our approach is an adaptive controller that seamlessly bridges the execution of manipulation motions and the generation of stable gaits based on task demands. Through the interplay between two controller modules, ReLIC dynamically assigns each limb for manipulation or locomotion and robustly coordinates them to achieve task success. Using efficient reinforcement learning in simulation, ReLIC learns to perform stable gaits in accordance with the manipulation goals in the real world. To solve diverse and complex tasks, we further propose to interface the learned controller with different types of task specifications, including target trajectories, contact points, and natural language instructions. Evaluated on 12 real-world tasks that require diverse and complex coordination patterns, ReLIC demonstrates its versatility and robustness by achieving a success rate of 78.9% on average.
APA
Zhu, X., Chen, Y., Sun, L., Niroui, F., Cleac’h, S.L., Wang, J. & Fang, K. (2025). Versatile Loco-Manipulation through Flexible Interlimb Coordination. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:610-632. Available from https://proceedings.mlr.press/v305/zhu25a.html.
