Sample Complexity of Estimating the Policy Gradient for Nearly Deterministic Dynamical Systems

Osbert Bastani
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3858-3869, 2020.

Abstract

Reinforcement learning is a promising approach to learning robotics controllers. It has recently been shown that algorithms based on finite-difference estimates of the policy gradient are competitive with algorithms based on the policy gradient theorem. We propose a theoretical framework for understanding this phenomenon. Our key insight is that many dynamical systems (especially those of interest in robotics control tasks) are nearly deterministic—i.e., they can be modeled as a deterministic system with a small stochastic perturbation. We show that for such systems, finite-difference estimates of the policy gradient can have substantially lower variance than estimates based on the policy gradient theorem. Finally, we empirically evaluate our insights in an experiment on the inverted pendulum.
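To make the abstract's comparison concrete, below is a minimal Python sketch contrasting the two estimators it discusses: a two-point finite-difference estimate of the policy gradient versus a likelihood-ratio (policy gradient theorem / REINFORCE) estimate. Everything here is an illustrative assumption rather than the paper's actual setup: the 1-D linear system s' = A*s + B*u + sigma*noise (nearly deterministic when sigma is small), the quadratic cost, the linear policy, and all constants and function names are made up for this sketch; the paper's own evaluation is on the inverted pendulum.

import numpy as np

# Illustrative nearly deterministic system (NOT the paper's setup):
# s' = A*s + B*u + SIGMA * noise, with SIGMA small.
A, B, SIGMA = 1.0, 0.5, 0.01
HORIZON = 20

def rollout(theta, policy_std, rng):
    """Roll out the linear policy u = theta*s (plus optional Gaussian
    exploration noise) and return states, actions, and total reward."""
    s, ret = 1.0, 0.0
    states, actions = [], []
    for _ in range(HORIZON):
        u = theta * s + policy_std * rng.standard_normal()
        states.append(s)
        actions.append(u)
        s = A * s + B * u + SIGMA * rng.standard_normal()
        ret += -s**2  # quadratic cost as negative reward
    return np.array(states), np.array(actions), ret

def finite_difference_grad(theta, delta, n, rng):
    """Two-point finite-difference estimate of dJ/dtheta,
    using a deterministic policy (policy_std = 0)."""
    j_plus = np.mean([rollout(theta + delta, 0.0, rng)[2] for _ in range(n)])
    j_minus = np.mean([rollout(theta - delta, 0.0, rng)[2] for _ in range(n)])
    return (j_plus - j_minus) / (2.0 * delta)

def reinforce_grad(theta, policy_std, n, rng):
    """Likelihood-ratio (policy gradient theorem) estimate, which
    requires a stochastic policy u ~ N(theta*s, policy_std^2)."""
    grads = []
    for _ in range(n):
        states, actions, ret = rollout(theta, policy_std, rng)
        # d/dtheta log N(u; theta*s, std^2) = (u - theta*s) * s / std^2
        score = np.sum((actions - theta * states) * states) / policy_std**2
        grads.append(score * ret)
    return float(np.mean(grads))

rng = np.random.default_rng(0)
fd = [finite_difference_grad(-0.5, 0.1, 10, rng) for _ in range(100)]
lr = [reinforce_grad(-0.5, 0.1, 10, rng) for _ in range(100)]
print(f"finite-difference: mean={np.mean(fd):+.3f}, std={np.std(fd):.3f}")
print(f"likelihood-ratio:  mean={np.mean(lr):+.3f}, std={np.std(lr):.3f}")

On a system like this, repeated runs of the finite-difference estimate typically show much lower variance than the likelihood-ratio estimate, which is consistent with the phenomenon the paper analyzes; the paper itself should be consulted for the precise assumptions and bounds.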

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-bastani20a,
  title     = {Sample Complexity of Estimating the Policy Gradient for Nearly Deterministic Dynamical Systems},
  author    = {Bastani, Osbert},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3858--3869},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/bastani20a/bastani20a.pdf},
  url       = {https://proceedings.mlr.press/v108/bastani20a.html}
}
Endnote
%0 Conference Paper
%T Sample Complexity of Estimating the Policy Gradient for Nearly Deterministic Dynamical Systems
%A Osbert Bastani
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-bastani20a
%I PMLR
%P 3858--3869
%U https://proceedings.mlr.press/v108/bastani20a.html
%V 108
APA
Bastani, O. (2020). Sample Complexity of Estimating the Policy Gradient for Nearly Deterministic Dynamical Systems. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3858-3869. Available from https://proceedings.mlr.press/v108/bastani20a.html.
