Trust Region Policy Optimization

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, Philipp Moritz
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1889-1897, 2015.

Abstract

In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
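To illustrate the trust-region idea described above, here is a minimal sketch, not the paper's algorithm: TRPO proper uses a natural-gradient direction computed with conjugate gradients over neural-network policies, whereas this toy version updates a softmax policy over a single state by plain gradient ascent on the surrogate objective, backtracking until the KL divergence to the old policy stays within a fixed bound. All function names and constants here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def trpo_step(theta, adv, delta=0.01, step=1.0, backtrack=0.5, max_iters=20):
    """One trust-region-style update of a softmax policy pi = softmax(theta).

    Ascends the surrogate objective sum_a pi(a) * adv[a] subject to
    KL(pi_old || pi_new) <= delta, enforced here by backtracking line
    search along the plain gradient (a simplification of the paper's
    natural-gradient step).
    """
    pi_old = softmax(theta)
    baseline = pi_old @ adv
    # Gradient of the surrogate w.r.t. theta for a softmax policy:
    # d/dtheta_i [sum_a pi_a A_a] = pi_i * (A_i - sum_a pi_a A_a)
    grad = pi_old * (adv - baseline)
    for _ in range(max_iters):
        theta_new = theta + step * grad
        pi_new = softmax(theta_new)
        improves = pi_new @ adv > pi_old @ adv
        if improves and kl(pi_old, pi_new) <= delta:
            return theta_new       # accept: improves and stays in trust region
        step *= backtrack          # shrink the step and retry
    return theta                   # no acceptable step found; keep old policy
```

Starting from a uniform policy over three actions with advantages [1, 0, -1], a single call shifts probability toward the first action while keeping the KL divergence from the old policy below `delta`, mirroring (in miniature) the constrained improvement the abstract describes.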

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-schulman15,
  title     = {Trust Region Policy Optimization},
  author    = {John Schulman and Sergey Levine and Pieter Abbeel and Michael Jordan and Philipp Moritz},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1889--1897},
  year      = {2015},
  editor    = {Francis Bach and David Blei},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/schulman15.pdf},
  url       = {http://proceedings.mlr.press/v37/schulman15.html},
  abstract  = {In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.}
}
Endnote
%0 Conference Paper
%T Trust Region Policy Optimization
%A John Schulman
%A Sergey Levine
%A Pieter Abbeel
%A Michael Jordan
%A Philipp Moritz
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-schulman15
%I PMLR
%J Proceedings of Machine Learning Research
%P 1889--1897
%U http://proceedings.mlr.press
%V 37
%W PMLR
%X In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
RIS
TY - CPAPER
TI - Trust Region Policy Optimization
AU - John Schulman
AU - Sergey Levine
AU - Pieter Abbeel
AU - Michael Jordan
AU - Philipp Moritz
BT - Proceedings of the 32nd International Conference on Machine Learning
PY - 2015/06/01
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-schulman15
PB - PMLR
SP - 1889
DP - PMLR
EP - 1897
L1 - http://proceedings.mlr.press/v37/schulman15.pdf
UR - http://proceedings.mlr.press/v37/schulman15.html
AB - In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
ER -
APA
Schulman, J., Levine, S., Abbeel, P., Jordan, M. & Moritz, P. (2015). Trust Region Policy Optimization. Proceedings of the 32nd International Conference on Machine Learning, in PMLR 37:1889-1897.
