Neural Geometric Fabrics: Efficiently Learning High-Dimensional Policies from Demonstration

Mandy Xie, Ankur Handa, Stephen Tyree, Dieter Fox, Harish Ravichandar, Nathan D. Ratliff, Karl Van Wyk
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1355-1367, 2023.

Abstract

Learning dexterous manipulation policies for multi-fingered robots has been a long-standing challenge in robotics. Existing methods either limit themselves to highly constrained problems and smaller models to achieve extreme sample efficiency or sacrifice sample efficiency to gain capacity to solve more complex tasks with deep neural networks. In this work, we develop a structured approach to sample-efficient learning of dexterous manipulation skills from demonstrations by leveraging recent advances in robot motion generation and control. Specifically, our policy structure is induced by Geometric Fabrics - a recent framework that generalizes classical mechanical systems to allow for flexible design of expressive robot motions. To avoid the cumbersome manual design required by existing motion generators, we introduce Neural Geometric Fabric (NGF) - a framework that learns Geometric Fabric-based policies from data. NGF policies are provably stable and capable of encoding speed-invariant geometries of complex motions in multiple task spaces simultaneously. We demonstrate that NGFs can learn to perform a variety of dexterous manipulation tasks on a 23-DoF hand-arm physical robotic platform purely from demonstrations. Results from comprehensive comparative and ablative experiments show that NGF’s structure and action spaces help learn acceleration-based policies that consistently outperform state-of-the-art baselines like Riemannian Motion Policies (RMPs), and other commonly used networks, such as feed-forward and recurrent neural networks. More importantly, we demonstrate that NGFs do not rely on often-used and expertly-designed operational-space controllers, promoting an advancement towards efficiently learning safe, stable, and high-dimensional controllers.
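
As a rough illustration of the machinery the abstract refers to, the sketch below shows how a geometric-fabric-style policy resolves a configuration-space acceleration by pulling per-task-space terms (a metric and a desired acceleration) back through their task maps, and how a speed-invariant (homogeneous-of-degree-2) geometric term can be written. This is a minimal conceptual sketch in NumPy under stated assumptions, not the paper's NGF implementation: the names pullback_resolve and attractor_fabric and the (phi, J, Jdot, fabric) interface are illustrative, and NGF instead parameterizes and learns such terms with neural networks across multiple task spaces.

    import numpy as np

    def attractor_fabric(x_goal, k=1.0):
        # A hand-written fabric term on one task space: Euclidean metric plus a
        # geometric acceleration toward x_goal. Scaling by ||xd||^2 makes the term
        # homogeneous of degree 2 in velocity, so the path it traces is the same
        # regardless of execution speed (the "speed-invariant geometry" idea).
        def fabric(x, xd):
            M = np.eye(x.shape[0])
            xdd = -k * np.dot(xd, xd) * (x - x_goal)
            return M, xdd
        return fabric

    def pullback_resolve(q, qd, task_maps):
        # Combine per-task-space fabric terms into one configuration-space acceleration.
        # Each entry of task_maps is (phi, J, Jdot, fabric):
        #   phi(q)        -> task-space position x
        #   J(q)          -> Jacobian dphi/dq
        #   Jdot(q, qd)   -> time derivative of the Jacobian
        #   fabric(x, xd) -> (M, xdd): task-space metric and desired acceleration
        n = q.shape[0]
        M_total = np.zeros((n, n))
        f_total = np.zeros(n)
        for phi, J, Jdot, fabric in task_maps:
            x = phi(q)
            Jq = J(q)
            xd = Jq @ qd
            M, xdd = fabric(x, xd)
            M_total += Jq.T @ M @ Jq                        # pull the metric back to configuration space
            f_total += Jq.T @ M @ (xdd - Jdot(q, qd) @ qd)  # pull back the curvature-corrected force
        # Metric-weighted resolution of all task-space accelerations.
        return np.linalg.pinv(M_total) @ f_total

    # Illustrative call: a 2-DoF system whose only task space is the configuration space itself.
    identity_map = (lambda q: q,
                    lambda q: np.eye(2),
                    lambda q, qd: np.zeros((2, 2)),
                    attractor_fabric(np.array([1.0, 0.0])))
    qdd = pullback_resolve(np.zeros(2), np.array([0.1, 0.2]), [identity_map])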

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-xie23a,
  title     = {Neural Geometric Fabrics: Efficiently Learning High-Dimensional Policies from Demonstration},
  author    = {Xie, Mandy and Handa, Ankur and Tyree, Stephen and Fox, Dieter and Ravichandar, Harish and Ratliff, Nathan D. and Wyk, Karl Van},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1355--1367},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/xie23a/xie23a.pdf},
  url       = {https://proceedings.mlr.press/v205/xie23a.html}
}
Endnote
%0 Conference Paper
%T Neural Geometric Fabrics: Efficiently Learning High-Dimensional Policies from Demonstration
%A Mandy Xie
%A Ankur Handa
%A Stephen Tyree
%A Dieter Fox
%A Harish Ravichandar
%A Nathan D. Ratliff
%A Karl Van Wyk
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-xie23a
%I PMLR
%P 1355--1367
%U https://proceedings.mlr.press/v205/xie23a.html
%V 205
APA
Xie, M., Handa, A., Tyree, S., Fox, D., Ravichandar, H., Ratliff, N. D. & Wyk, K. V. (2023). Neural Geometric Fabrics: Efficiently Learning High-Dimensional Policies from Demonstration. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1355-1367. Available from https://proceedings.mlr.press/v205/xie23a.html.
