Morphologically Symmetric Reinforcement Learning for Ambidextrous Bimanual Manipulation

Zechu Li, Yufeng Jin, Daniel Ordonez-Apraez, Claudio Semini, Puze Liu, Georgia Chalvatzaki
Proceedings of The 9th Conference on Robot Learning, PMLR 305:1953-1974, 2025.

Abstract

Humans naturally exhibit bilateral symmetry in their gross manipulation skills, effortlessly mirroring simple actions between left and right hands. Bimanual robots—which also feature bilateral symmetry—should similarly exploit this property to perform tasks with either hand. Unlike humans, who often favor a dominant hand for fine dexterous skills, robots should ideally execute ambidextrous manipulation with equal proficiency. To this end, we introduce SYMDEX (SYMmetric DEXterity), a reinforcement learning framework for ambidextrous bi-manipulation that leverages the robot’s inherent bilateral symmetry as an inductive bias. SYMDEX decomposes complex bimanual manipulation tasks into per-hand subtasks and trains dedicated policies for each. By exploiting bilateral symmetry via equivariant neural networks, experience from one arm is inherently leveraged by the opposite arm. We then distill the subtask policies into a global ambidextrous policy that is independent of the hand-task assignment. We evaluate SYMDEX on six challenging simulated manipulation tasks and demonstrate successful real-world deployment on two of them. Our approach outperforms baselines on more complex, asymmetric tasks, where the left and right hands perform different roles. We further demonstrate SYMDEX’s scalability by extending it to a four-arm manipulation setup, where our symmetry-aware policies enable effective multi-arm collaboration and coordination. Our results highlight how structural symmetry as inductive bias in policy learning enhances sample efficiency, robustness, and generalization across diverse dexterous manipulation tasks.
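As a rough illustration of the symmetry idea described in the abstract (not the paper's actual architecture), the core property of an equivariant policy is that mirroring the state mirrors the action: pi(G_s s) = G_a pi(s), where G_s and G_a are the mirror operators on states and actions. A minimal sketch, assuming the mirror operators are involutive block-swap matrices (a simplifying assumption; the real representations depend on the robot's morphology):

```python
import numpy as np

def mirror_matrix(n):
    """Block-swap matrix exchanging the first and second halves of a vector,
    an illustrative stand-in for a left/right mirror operator (G @ G = I)."""
    half = n // 2
    M = np.zeros((n, n))
    M[:half, half:] = np.eye(half)
    M[half:, :half] = np.eye(half)
    return M

def symmetrize(policy, G_s, G_a):
    """Wrap an arbitrary policy so the result satisfies pi(G_s s) = G_a pi(s).

    Averages the policy's output with the mirrored output on the mirrored
    state; because G_s and G_a are involutions, the wrapped policy is
    exactly equivariant, so experience from one hand transfers to the other.
    """
    G_a_inv = np.linalg.inv(G_a)

    def equivariant_policy(s):
        return 0.5 * (policy(s) + G_a_inv @ policy(G_s @ s))

    return equivariant_policy
```

This symmetrization trick is only one way to obtain equivariance; equivariant network layers, as used in SYMDEX, build the constraint into the weights instead of averaging at the output.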

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-li25d,
  title     = {Morphologically Symmetric Reinforcement Learning for Ambidextrous Bimanual Manipulation},
  author    = {Li, Zechu and Jin, Yufeng and Ordonez-Apraez, Daniel and Semini, Claudio and Liu, Puze and Chalvatzaki, Georgia},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {1953--1974},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/li25d/li25d.pdf},
  url       = {https://proceedings.mlr.press/v305/li25d.html},
  abstract  = {Humans naturally exhibit bilateral symmetry in their gross manipulation skills, effortlessly mirroring simple actions between left and right hands. Bimanual robots—which also feature bilateral symmetry—should similarly exploit this property to perform tasks with either hand. Unlike humans, who often favor a dominant hand for fine dexterous skills, robots should ideally execute ambidextrous manipulation with equal proficiency. To this end, we introduce SYMDEX (SYMmetric DEXterity), a reinforcement learning framework for ambidextrous bi-manipulation that leverages the robot’s inherent bilateral symmetry as an inductive bias. SYMDEX decomposes complex bimanual manipulation tasks into per-hand subtasks and trains dedicated policies for each. By exploiting bilateral symmetry via equivariant neural networks, experience from one arm is inherently leveraged by the opposite arm. We then distill the subtask policies into a global ambidextrous policy that is independent of the hand-task assignment. We evaluate SYMDEX on six challenging simulated manipulation tasks and demonstrate successful real-world deployment on two of them. Our approach outperforms baselines on more complex, asymmetric tasks, where the left and right hands perform different roles. We further demonstrate SYMDEX’s scalability by extending it to a four-arm manipulation setup, where our symmetry-aware policies enable effective multi-arm collaboration and coordination. Our results highlight how structural symmetry as inductive bias in policy learning enhances sample efficiency, robustness, and generalization across diverse dexterous manipulation tasks.}
}
Endnote
%0 Conference Paper
%T Morphologically Symmetric Reinforcement Learning for Ambidextrous Bimanual Manipulation
%A Zechu Li
%A Yufeng Jin
%A Daniel Ordonez-Apraez
%A Claudio Semini
%A Puze Liu
%A Georgia Chalvatzaki
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-li25d
%I PMLR
%P 1953--1974
%U https://proceedings.mlr.press/v305/li25d.html
%V 305
%X Humans naturally exhibit bilateral symmetry in their gross manipulation skills, effortlessly mirroring simple actions between left and right hands. Bimanual robots—which also feature bilateral symmetry—should similarly exploit this property to perform tasks with either hand. Unlike humans, who often favor a dominant hand for fine dexterous skills, robots should ideally execute ambidextrous manipulation with equal proficiency. To this end, we introduce SYMDEX (SYMmetric DEXterity), a reinforcement learning framework for ambidextrous bi-manipulation that leverages the robot’s inherent bilateral symmetry as an inductive bias. SYMDEX decomposes complex bimanual manipulation tasks into per-hand subtasks and trains dedicated policies for each. By exploiting bilateral symmetry via equivariant neural networks, experience from one arm is inherently leveraged by the opposite arm. We then distill the subtask policies into a global ambidextrous policy that is independent of the hand-task assignment. We evaluate SYMDEX on six challenging simulated manipulation tasks and demonstrate successful real-world deployment on two of them. Our approach outperforms baselines on more complex, asymmetric tasks, where the left and right hands perform different roles. We further demonstrate SYMDEX’s scalability by extending it to a four-arm manipulation setup, where our symmetry-aware policies enable effective multi-arm collaboration and coordination. Our results highlight how structural symmetry as inductive bias in policy learning enhances sample efficiency, robustness, and generalization across diverse dexterous manipulation tasks.
APA
Li, Z., Jin, Y., Ordonez-Apraez, D., Semini, C., Liu, P. & Chalvatzaki, G. (2025). Morphologically Symmetric Reinforcement Learning for Ambidextrous Bimanual Manipulation. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:1953-1974. Available from https://proceedings.mlr.press/v305/li25d.html.
