Robust Manipulation Primitive Learning via Domain Contraction

Teng Xue, Amirreza Razmjoo, Suhan Shetty, Sylvain Calinon
Proceedings of The 8th Conference on Robot Learning, PMLR 270:794-809, 2025.

Abstract

Contact-rich manipulation plays an important role in everyday life, but uncertain parameters pose significant challenges to model-based planning and control. To address this issue, domain adaptation and domain randomization have been proposed to learn robust policies. However, they either lose the generalization ability to diverse instances or perform conservatively due to neglecting instance-specific information. In this paper, we propose a bi-level approach to learn robust manipulation primitives, including parameter-augmented policy learning using multiple models with tensor approximation, and parameter-conditioned policy retrieval through domain contraction. This approach unifies domain randomization and domain adaptation, providing optimal behaviors while keeping generalization ability. We validate the proposed method on three contact-rich manipulation primitives: hitting, pushing, and reorientation. The experimental results showcase the superior performance of our approach in generating robust policies for instances with diverse physical parameters.

Cite this Paper
BibTeX
@InProceedings{pmlr-v270-xue25a,
  title     = {Robust Manipulation Primitive Learning via Domain Contraction},
  author    = {Xue, Teng and Razmjoo, Amirreza and Shetty, Suhan and Calinon, Sylvain},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {794--809},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/xue25a/xue25a.pdf},
  url       = {https://proceedings.mlr.press/v270/xue25a.html},
  abstract  = {Contact-rich manipulation plays an important role in everyday life, but uncertain parameters pose significant challenges to model-based planning and control. To address this issue, domain adaptation and domain randomization have been proposed to learn robust policies. However, they either lose the generalization ability to diverse instances or perform conservatively due to neglecting instance-specific information. In this paper, we propose a bi-level approach to learn robust manipulation primitives, including parameter-augmented policy learning using multiple models with tensor approximation, and parameter-conditioned policy retrieval through domain contraction. This approach unifies domain randomization and domain adaptation, providing optimal behaviors while keeping generalization ability. We validate the proposed method on three contact-rich manipulation primitives: hitting, pushing, and reorientation. The experimental results showcase the superior performance of our approach in generating robust policies for instances with diverse physical parameters.}
}
Endnote
%0 Conference Paper
%T Robust Manipulation Primitive Learning via Domain Contraction
%A Teng Xue
%A Amirreza Razmjoo
%A Suhan Shetty
%A Sylvain Calinon
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-xue25a
%I PMLR
%P 794--809
%U https://proceedings.mlr.press/v270/xue25a.html
%V 270
%X Contact-rich manipulation plays an important role in everyday life, but uncertain parameters pose significant challenges to model-based planning and control. To address this issue, domain adaptation and domain randomization have been proposed to learn robust policies. However, they either lose the generalization ability to diverse instances or perform conservatively due to neglecting instance-specific information. In this paper, we propose a bi-level approach to learn robust manipulation primitives, including parameter-augmented policy learning using multiple models with tensor approximation, and parameter-conditioned policy retrieval through domain contraction. This approach unifies domain randomization and domain adaptation, providing optimal behaviors while keeping generalization ability. We validate the proposed method on three contact-rich manipulation primitives: hitting, pushing, and reorientation. The experimental results showcase the superior performance of our approach in generating robust policies for instances with diverse physical parameters.
APA
Xue, T., Razmjoo, A., Shetty, S., & Calinon, S. (2025). Robust Manipulation Primitive Learning via Domain Contraction. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:794-809. Available from https://proceedings.mlr.press/v270/xue25a.html.