Learning a Unified Policy for Position and Force Control in Legged Loco-Manipulation

Peiyuan Zhi, Peiyang Li, Jianqin Yin, Baoxiong Jia, Siyuan Huang
Proceedings of The 9th Conference on Robot Learning, PMLR 305:652-669, 2025.

Abstract

Robotic loco-manipulation tasks often involve contact-rich interactions with the environment, requiring the joint modeling of contact force and robot position. However, recent visuomotor policies often focus solely on position or force control, overlooking their integration. In this work, we propose a unified policy for legged robots that jointly models force and position control, learned without reliance on force sensors. By simulating diverse combinations of active position and force commands alongside external disturbance forces, we use reinforcement learning to learn a policy that estimates forces from the robot’s historical states and compensates for them through position and velocity adjustments. Such a policy enables a wide range of manipulation behaviors under varying combinations of force and position inputs, including position tracking, force application, force tracking, and compliant robot behaviors. Additionally, we demonstrate that the learned policy enhances trajectory-based imitation learning pipelines by incorporating essential contact information through its force estimation module, achieving approximately 39.5% higher success rates across four challenging contact-rich manipulation tasks compared to position-control policies. Extensive experiments on both a quadrupedal mobile manipulation platform and a humanoid validate the versatility and robustness of the proposed policy across diverse scenarios.
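To make the abstract's description concrete, the sketch below illustrates one possible interface for such a unified policy: a force-estimation module that infers external forces from a history of proprioceptive states (no force sensor), and a control head that consumes the current state, a combined position/force command, and the force estimate to output position and velocity adjustments. This is a minimal illustration of the idea only; the module names, dimensions, and GRU/MLP layout are assumptions and do not reproduce the authors' actual architecture or released code.

```python
# Minimal sketch (PyTorch) of the interface a unified position/force policy might expose.
# All names, dimensions, and the two-head layout are illustrative assumptions, not the
# paper's implementation.
import torch
import torch.nn as nn


class UnifiedPositionForcePolicy(nn.Module):
    """Toy policy: estimate external force from a history of proprioceptive states,
    then output position/velocity targets that track the commanded pose while
    compensating for the estimated force."""

    def __init__(self, state_dim=48, cmd_dim=9, act_dim=12, hidden=128):
        super().__init__()
        # Force-estimation module: encodes the state history (no force sensor assumed).
        self.history_encoder = nn.GRU(state_dim, hidden, batch_first=True)
        self.force_head = nn.Linear(hidden, 3)          # estimated external force (x, y, z)
        # Control head: consumes current state, command, and estimated force.
        self.actor = nn.Sequential(
            nn.Linear(state_dim + cmd_dim + 3, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, act_dim),                 # joint position/velocity targets
        )

    def forward(self, state_hist, state, command):
        # state_hist: (B, T, state_dim), state: (B, state_dim),
        # command: (B, cmd_dim) = target pose (6) + desired force (3)
        _, h = self.history_encoder(state_hist)
        force_est = self.force_head(h[-1])              # (B, 3)
        action = self.actor(torch.cat([state, command, force_est], dim=-1))
        return action, force_est


if __name__ == "__main__":
    policy = UnifiedPositionForcePolicy()
    hist = torch.randn(1, 25, 48)                       # 25-step state history
    state = torch.randn(1, 48)
    cmd = torch.zeros(1, 9)                             # zero force command: pure position tracking
    action, f_hat = policy(hist, state, cmd)
    print(action.shape, f_hat.shape)                    # torch.Size([1, 12]) torch.Size([1, 3])
```

Under this reading, varying the command's pose and force components would select among the behaviors the abstract lists (position tracking, force application, force tracking, compliance), and the force estimate is the contact signal that the paper feeds into trajectory-based imitation learning.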

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-zhi25a,
  title     = {Learning a Unified Policy for Position and Force Control in Legged Loco-Manipulation},
  author    = {Zhi, Peiyuan and Li, Peiyang and Yin, Jianqin and Jia, Baoxiong and Huang, Siyuan},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {652--669},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/zhi25a/zhi25a.pdf},
  url       = {https://proceedings.mlr.press/v305/zhi25a.html},
  abstract  = {Robotic loco-manipulation tasks often involve contact-rich interactions with the environment, requiring the joint modeling of contact force and robot position. However, recent visuomotor policies often focus solely on position or force control, overlooking their integration. In this work, we propose a unified policy for legged robots that jointly models force and position control, learned without reliance on force sensors. By simulating diverse combinations of active position and force commands alongside external disturbance forces, we use reinforcement learning to learn a policy that estimates forces from the robot’s historical states and compensates for them through position and velocity adjustments. Such a policy enables a wide range of manipulation behaviors under varying combinations of force and position inputs, including position tracking, force application, force tracking, and compliant robot behaviors. Additionally, we demonstrate that the learned policy enhances trajectory-based imitation learning pipelines by incorporating essential contact information through its force estimation module, achieving approximately 39.5% higher success rates across four challenging contact-rich manipulation tasks compared to position-control policies. Extensive experiments on both a quadrupedal mobile manipulation platform and a humanoid validate the versatility and robustness of the proposed policy across diverse scenarios.}
}
Endnote
%0 Conference Paper
%T Learning a Unified Policy for Position and Force Control in Legged Loco-Manipulation
%A Peiyuan Zhi
%A Peiyang Li
%A Jianqin Yin
%A Baoxiong Jia
%A Siyuan Huang
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-zhi25a
%I PMLR
%P 652--669
%U https://proceedings.mlr.press/v305/zhi25a.html
%V 305
%X Robotic loco-manipulation tasks often involve contact-rich interactions with the environment, requiring the joint modeling of contact force and robot position. However, recent visuomotor policies often focus solely on position or force control, overlooking their integration. In this work, we propose a unified policy for legged robots that jointly models force and position control, learned without reliance on force sensors. By simulating diverse combinations of active position and force commands alongside external disturbance forces, we use reinforcement learning to learn a policy that estimates forces from the robot’s historical states and compensates for them through position and velocity adjustments. Such a policy enables a wide range of manipulation behaviors under varying combinations of force and position inputs, including position tracking, force application, force tracking, and compliant robot behaviors. Additionally, we demonstrate that the learned policy enhances trajectory-based imitation learning pipelines by incorporating essential contact information through its force estimation module, achieving approximately 39.5% higher success rates across four challenging contact-rich manipulation tasks compared to position-control policies. Extensive experiments on both a quadrupedal mobile manipulation platform and a humanoid validate the versatility and robustness of the proposed policy across diverse scenarios.
APA
Zhi, P., Li, P., Yin, J., Jia, B. & Huang, S. (2025). Learning a Unified Policy for Position and Force Control in Legged Loco-Manipulation. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:652-669. Available from https://proceedings.mlr.press/v305/zhi25a.html.
