Rethinking Optimization with Differentiable Simulation from a Global Perspective

Rika Antonova, Jingyun Yang, Krishna Murthy Jatavallabhula, Jeannette Bohg
Proceedings of The 6th Conference on Robot Learning, PMLR 205:276-286, 2023.

Abstract

Differentiable simulation is a promising toolkit for fast gradient-based policy optimization and system identification. However, existing approaches to differentiable simulation have largely tackled scenarios where obtaining smooth gradients has been relatively easy, such as systems with mostly smooth dynamics. In this work, we study the challenges that differentiable simulation presents when it is not feasible to expect that a single descent reaches a global optimum, which is often a problem in contact-rich scenarios. We analyze the optimization landscapes of diverse scenarios that contain both rigid bodies and deformable objects. In dynamic environments with highly deformable objects and fluids, differentiable simulators produce rugged landscapes with nonetheless useful gradients in some parts of the space. We propose a method that combines Bayesian optimization with semi-local ‘leaps’ to obtain a global search method that can use gradients effectively, while also maintaining robust performance in regions with noisy gradients. We show that our approach outperforms several gradient-based and gradient-free baselines on an extensive set of experiments in simulation, and also validate the method using experiments with a real robot and deformables.
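
The abstract describes interleaving global Bayesian optimization with semi-local gradient ‘leaps’. The following Python sketch is only a rough illustration of that idea under stated assumptions, not the authors' implementation: the callables simulate_loss and simulate_grad are hypothetical stand-ins for a differentiable simulator's loss and gradient, and all hyperparameters are illustrative. A GP surrogate proposes promising points globally, and short gradient descents refine a proposal only while the gradient signal looks reliable.

    # Minimal sketch: Bayesian optimization with semi-local gradient "leaps".
    # simulate_loss(x) -> float and simulate_grad(x) -> np.ndarray are assumed
    # to be provided by a differentiable simulator (hypothetical interface).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def bo_with_gradient_leaps(simulate_loss, simulate_grad, bounds,
                               n_iters=50, n_init=5, leap_steps=10,
                               leap_lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        dim = bounds.shape[0]
        sample = lambda n: rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, dim))
        X = sample(n_init)                                   # initial random designs
        y = np.array([simulate_loss(x) for x in X])
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        for _ in range(n_iters):
            gp.fit(X, y)
            # Global proposal: lower-confidence-bound acquisition over random candidates.
            cand = sample(256)
            mu, std = gp.predict(cand, return_std=True)
            x = cand[np.argmin(mu - 2.0 * std)]
            # Semi-local "leap": follow simulator gradients from the proposal,
            # stopping early if gradients are non-finite or exploding (noisy region).
            for _ in range(leap_steps):
                g = simulate_grad(x)
                if not np.all(np.isfinite(g)) or np.linalg.norm(g) > 1e3:
                    break
                x = np.clip(x - leap_lr * g, bounds[:, 0], bounds[:, 1])
            X = np.vstack([X, x])
            y = np.append(y, simulate_loss(x))
        return X[np.argmin(y)], y.min()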

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-antonova23a,
  title     = {Rethinking Optimization with Differentiable Simulation from a Global Perspective},
  author    = {Antonova, Rika and Yang, Jingyun and Jatavallabhula, Krishna Murthy and Bohg, Jeannette},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {276--286},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/antonova23a/antonova23a.pdf},
  url       = {https://proceedings.mlr.press/v205/antonova23a.html},
  abstract  = {Differentiable simulation is a promising toolkit for fast gradient-based policy optimization and system identification. However, existing approaches to differentiable simulation have largely tackled scenarios where obtaining smooth gradients has been relatively easy, such as systems with mostly smooth dynamics. In this work, we study the challenges that differentiable simulation presents when it is not feasible to expect that a single descent reaches a global optimum, which is often a problem in contact-rich scenarios. We analyze the optimization landscapes of diverse scenarios that contain both rigid bodies and deformable objects. In dynamic environments with highly deformable objects and fluids, differentiable simulators produce rugged landscapes with nonetheless useful gradients in some parts of the space. We propose a method that combines Bayesian optimization with semi-local ‘leaps’ to obtain a global search method that can use gradients effectively, while also maintaining robust performance in regions with noisy gradients. We show that our approach outperforms several gradient-based and gradient-free baselines on an extensive set of experiments in simulation, and also validate the method using experiments with a real robot and deformables.}
}
Endnote
%0 Conference Paper
%T Rethinking Optimization with Differentiable Simulation from a Global Perspective
%A Rika Antonova
%A Jingyun Yang
%A Krishna Murthy Jatavallabhula
%A Jeannette Bohg
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-antonova23a
%I PMLR
%P 276--286
%U https://proceedings.mlr.press/v205/antonova23a.html
%V 205
%X Differentiable simulation is a promising toolkit for fast gradient-based policy optimization and system identification. However, existing approaches to differentiable simulation have largely tackled scenarios where obtaining smooth gradients has been relatively easy, such as systems with mostly smooth dynamics. In this work, we study the challenges that differentiable simulation presents when it is not feasible to expect that a single descent reaches a global optimum, which is often a problem in contact-rich scenarios. We analyze the optimization landscapes of diverse scenarios that contain both rigid bodies and deformable objects. In dynamic environments with highly deformable objects and fluids, differentiable simulators produce rugged landscapes with nonetheless useful gradients in some parts of the space. We propose a method that combines Bayesian optimization with semi-local ‘leaps’ to obtain a global search method that can use gradients effectively, while also maintaining robust performance in regions with noisy gradients. We show that our approach outperforms several gradient-based and gradient-free baselines on an extensive set of experiments in simulation, and also validate the method using experiments with a real robot and deformables.
APA
Antonova, R., Yang, J., Jatavallabhula, K.M. & Bohg, J. (2023). Rethinking Optimization with Differentiable Simulation from a Global Perspective. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:276-286. Available from https://proceedings.mlr.press/v205/antonova23a.html.
