Adaptive Horizon Actor-Critic for Policy Learning in Contact-Rich Differentiable Simulation

Ignat Georgiev, Krishnan Srinivasan, Jie Xu, Eric Heiden, Animesh Garg
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:15418-15437, 2024.

Abstract

Model-Free Reinforcement Learning (MFRL), leveraging the policy gradient theorem, has demonstrated considerable success in continuous control tasks. However, these approaches are plagued by high gradient variance due to zeroth-order gradient estimation, resulting in suboptimal policies. Conversely, First-Order Model-Based Reinforcement Learning (FO-MBRL) methods employing differentiable simulation provide gradients with reduced variance but are susceptible to sampling error in scenarios involving stiff dynamics, such as physical contact. This paper investigates the source of this error and introduces Adaptive Horizon Actor-Critic (AHAC), an FO-MBRL algorithm that reduces gradient error by adapting the model-based horizon to avoid stiff dynamics. Empirical findings reveal that AHAC outperforms MFRL baselines, attaining 40% more reward across a set of locomotion tasks and efficiently scaling to high-dimensional control environments with improved wall-clock-time efficiency. adaptive-horizon-actor-critic.github.io
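To make the core idea concrete, below is a minimal, illustrative sketch of adapting the model-based horizon: a policy is rolled through a toy differentiable dynamics model, first-order gradients flow through the rollout, and the horizon is truncated early when a contact/stiffness measure exceeds a threshold, with a critic bootstrapping the remaining return. This is a reconstruction under assumptions, not the authors' implementation; the toy dynamics, the penetration-based stiffness proxy, and all names here are hypothetical.

# Illustrative sketch only (assumed toy dynamics, not the paper's code).
import torch

def rollout_loss(policy, critic, state, max_horizon=32, contact_threshold=1.0):
    total_reward = 0.0
    discount = 1.0
    for t in range(max_horizon):
        action = policy(state)
        # Toy differentiable dynamics: point mass pushed toward a wall at x = 1.
        next_state = state + 0.1 * action
        penetration = torch.relu(next_state - 1.0)          # "contact" depth
        reward = -(next_state - 0.9) ** 2 - 10.0 * penetration ** 2
        total_reward = total_reward + discount * reward.sum()
        discount *= 0.99
        # Hypothetical stiffness proxy: large contact forces imply stiff dynamics,
        # so stop differentiating through further steps (adaptive horizon).
        if penetration.abs().max() > contact_threshold:
            break
        state = next_state
    # Bootstrap the truncated tail with the critic's terminal value estimate.
    total_reward = total_reward + discount * critic(state).sum()
    return -total_reward                                     # minimize negative return

# Usage: small MLPs for policy and critic; gradients flow through the toy simulator.
policy = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
critic = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
state = torch.zeros(1, 1)
loss = rollout_loss(policy, critic, state)
loss.backward()  # first-order gradients via backprop through the (toy) rollout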

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-georgiev24a,
  title     = {Adaptive Horizon Actor-Critic for Policy Learning in Contact-Rich Differentiable Simulation},
  author    = {Georgiev, Ignat and Srinivasan, Krishnan and Xu, Jie and Heiden, Eric and Garg, Animesh},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {15418--15437},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/georgiev24a/georgiev24a.pdf},
  url       = {https://proceedings.mlr.press/v235/georgiev24a.html},
  abstract  = {Model-Free Reinforcement Learning (MFRL), leveraging the policy gradient theorem, has demonstrated considerable success in continuous control tasks. However, these approaches are plagued by high gradient variance due to zeroth-order gradient estimation, resulting in suboptimal policies. Conversely, First-Order Model-Based Reinforcement Learning (FO-MBRL) methods employing differentiable simulation provide gradients with reduced variance but are susceptible to sampling error in scenarios involving stiff dynamics, such as physical contact. This paper investigates the source of this error and introduces Adaptive Horizon Actor-Critic (AHAC), an FO-MBRL algorithm that reduces gradient error by adapting the model-based horizon to avoid stiff dynamics. Empirical findings reveal that AHAC outperforms MFRL baselines, attaining 40% more reward across a set of locomotion tasks and efficiently scaling to high-dimensional control environments with improved wall-clock-time efficiency. adaptive-horizon-actor-critic.github.io}
}
Endnote
%0 Conference Paper
%T Adaptive Horizon Actor-Critic for Policy Learning in Contact-Rich Differentiable Simulation
%A Ignat Georgiev
%A Krishnan Srinivasan
%A Jie Xu
%A Eric Heiden
%A Animesh Garg
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-georgiev24a
%I PMLR
%P 15418--15437
%U https://proceedings.mlr.press/v235/georgiev24a.html
%V 235
%X Model-Free Reinforcement Learning (MFRL), leveraging the policy gradient theorem, has demonstrated considerable success in continuous control tasks. However, these approaches are plagued by high gradient variance due to zeroth-order gradient estimation, resulting in suboptimal policies. Conversely, First-Order Model-Based Reinforcement Learning (FO-MBRL) methods employing differentiable simulation provide gradients with reduced variance but are susceptible to sampling error in scenarios involving stiff dynamics, such as physical contact. This paper investigates the source of this error and introduces Adaptive Horizon Actor-Critic (AHAC), an FO-MBRL algorithm that reduces gradient error by adapting the model-based horizon to avoid stiff dynamics. Empirical findings reveal that AHAC outperforms MFRL baselines, attaining 40% more reward across a set of locomotion tasks and efficiently scaling to high-dimensional control environments with improved wall-clock-time efficiency. adaptive-horizon-actor-critic.github.io
APA
Georgiev, I., Srinivasan, K., Xu, J., Heiden, E. & Garg, A. (2024). Adaptive Horizon Actor-Critic for Policy Learning in Contact-Rich Differentiable Simulation. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:15418-15437. Available from https://proceedings.mlr.press/v235/georgiev24a.html.