Trust Region Policy Optimization

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, Philipp Moritz
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1889-1897, 2015.

Abstract

In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
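For context (this equation is not part of the original page), the KL-constrained policy update at the heart of TRPO can be sketched as below; the notation is assumed here (θ_old for the current policy parameters, A for an advantage estimate, δ for the trust-region size), so see the paper for the exact statement:

\[
\begin{aligned}
\max_{\theta}\quad & \mathbb{E}_{s,\,a \sim \pi_{\theta_{\text{old}}}}\!\left[\frac{\pi_{\theta}(a \mid s)}{\pi_{\theta_{\text{old}}}(a \mid s)}\, A_{\theta_{\text{old}}}(s, a)\right] \\
\text{subject to}\quad & \mathbb{E}_{s}\!\left[ D_{\mathrm{KL}}\!\big(\pi_{\theta_{\text{old}}}(\cdot \mid s)\,\big\|\,\pi_{\theta}(\cdot \mid s)\big) \right] \le \delta
\end{aligned}
\]

In practice, the paper approximates this problem with a linearized objective and a quadratic approximation to the KL constraint, solved via conjugate gradient followed by a line search.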

Cite this Paper

BibTeX
@InProceedings{pmlr-v37-schulman15,
  title     = {Trust Region Policy Optimization},
  author    = {Schulman, John and Levine, Sergey and Abbeel, Pieter and Jordan, Michael and Moritz, Philipp},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1889--1897},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/schulman15.pdf},
  url       = {https://proceedings.mlr.press/v37/schulman15.html}
}
APA
Schulman, J., Levine, S., Abbeel, P., Jordan, M. & Moritz, P. (2015). Trust Region Policy Optimization. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1889-1897. Available from https://proceedings.mlr.press/v37/schulman15.html.
