DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics

Tyler Ga Wei Lum, Martin Matak, Viktor Makoviychuk, Ankur Handa, Arthur Allshire, Tucker Hermans, Nathan D. Ratliff, Karl Van Wyk
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3182-3211, 2025.

Abstract

A pivotal challenge in robotics is achieving fast, safe, and robust dexterous grasping across a diverse range of objects, an important goal within industrial applications. However, existing methods often have very limited speed, dexterity, and generality, along with limited or no hardware safety guarantees. In this work, we introduce DextrAH-G, a depth-based dexterous grasping policy trained entirely in simulation that combines reinforcement learning, geometric fabrics, and teacher-student distillation. We address key challenges in joint arm-hand policy learning, such as high-dimensional observation and action spaces, the sim2real gap, collision avoidance, and hardware constraints. DextrAH-G enables a 23-motor arm-hand robot to safely and continuously grasp and transport a large variety of objects at high speed using multi-modal inputs including depth images, allowing generalization across object geometry. Videos at https://sites.google.com/view/dextrah-g.
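The teacher-student distillation mentioned above can be sketched in miniature: a teacher policy with access to privileged state supervises a student policy that only sees depth-derived features, by regressing the student's actions onto the teacher's. The snippet below is an illustrative toy with linear policies and made-up dimensions (`STATE_DIM`, `DEPTH_DIM`, the state-to-depth map `W_obs`), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only): privileged state seen by
# the teacher, a flat depth-feature vector seen by the student, and a
# 23-dim action space matching the 23-motor arm-hand system.
STATE_DIM, DEPTH_DIM, ACT_DIM = 16, 64, 23

# Frozen "teacher": a fixed linear policy over privileged state,
# standing in for the RL-trained teacher (random weights here).
W_teacher = rng.normal(size=(ACT_DIM, STATE_DIM))

def teacher(state):
    return W_teacher @ state

# Assumed fixed mapping from privileged state to depth features, so the
# teacher and student observe consistent views of each sampled scene.
W_obs = rng.normal(size=(DEPTH_DIM, STATE_DIM))

def distill_step(W_student, state, depth, lr=1e-4):
    """One SGD step on the squared error between student and teacher actions."""
    err = W_student @ depth - teacher(state)
    W_student = W_student - lr * np.outer(err, depth)  # grad of 0.5*||err||^2
    return W_student, float(np.mean(err ** 2))

# Distillation loop over randomly sampled "scenes".
W_student = np.zeros((ACT_DIM, DEPTH_DIM))
losses = []
for _ in range(2000):
    s = rng.normal(size=STATE_DIM)
    W_student, loss = distill_step(W_student, s, W_obs @ s)
    losses.append(loss)

print(f"imitation loss: {losses[0]:.1f} -> {losses[-1]:.4f}")
```

In the actual system the teacher is trained with reinforcement learning in simulation and the student consumes depth images through a learned encoder; the regression-onto-teacher-actions structure shown here is the distillation idea in its simplest form.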

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-lum25a,
  title     = {DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics},
  author    = {Lum, Tyler Ga Wei and Matak, Martin and Makoviychuk, Viktor and Handa, Ankur and Allshire, Arthur and Hermans, Tucker and Ratliff, Nathan D. and Van Wyk, Karl},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {3182--3211},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/lum25a/lum25a.pdf},
  url       = {https://proceedings.mlr.press/v270/lum25a.html},
  abstract  = {A pivotal challenge in robotics is achieving fast, safe, and robust dexterous grasping across a diverse range of objects, an important goal within industrial applications. However, existing methods often have very limited speed, dexterity, and generality, along with limited or no hardware safety guarantees. In this work, we introduce DextrAH-G, a depth-based dexterous grasping policy trained entirely in simulation that combines reinforcement learning, geometric fabrics, and teacher-student distillation. We address key challenges in joint arm-hand policy learning, such as high-dimensional observation and action spaces, the sim2real gap, collision avoidance, and hardware constraints. DextrAH-G enables a 23-motor arm-hand robot to safely and continuously grasp and transport a large variety of objects at high speed using multi-modal inputs including depth images, allowing generalization across object geometry. Videos at https://sites.google.com/view/dextrah-g.}
}
Endnote
%0 Conference Paper
%T DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics
%A Tyler Ga Wei Lum
%A Martin Matak
%A Viktor Makoviychuk
%A Ankur Handa
%A Arthur Allshire
%A Tucker Hermans
%A Nathan D. Ratliff
%A Karl Van Wyk
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-lum25a
%I PMLR
%P 3182--3211
%U https://proceedings.mlr.press/v270/lum25a.html
%V 270
%X A pivotal challenge in robotics is achieving fast, safe, and robust dexterous grasping across a diverse range of objects, an important goal within industrial applications. However, existing methods often have very limited speed, dexterity, and generality, along with limited or no hardware safety guarantees. In this work, we introduce DextrAH-G, a depth-based dexterous grasping policy trained entirely in simulation that combines reinforcement learning, geometric fabrics, and teacher-student distillation. We address key challenges in joint arm-hand policy learning, such as high-dimensional observation and action spaces, the sim2real gap, collision avoidance, and hardware constraints. DextrAH-G enables a 23-motor arm-hand robot to safely and continuously grasp and transport a large variety of objects at high speed using multi-modal inputs including depth images, allowing generalization across object geometry. Videos at https://sites.google.com/view/dextrah-g.
APA
Lum, T.G.W., Matak, M., Makoviychuk, V., Handa, A., Allshire, A., Hermans, T., Ratliff, N.D. & Van Wyk, K. (2025). DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3182-3211. Available from https://proceedings.mlr.press/v270/lum25a.html.