A Parametric Class of Approximate Gradient Updates for Policy Optimization

Ramki Gummadi, Saurabh Kumar, Junfeng Wen, Dale Schuurmans
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:7998-8015, 2022.

Abstract

Approaches to policy optimization have been motivated from diverse principles, based on how the parametric model is interpreted (e.g. value versus policy representation) or how the learning objective is formulated, yet they share a common goal of maximizing expected return. To better capture the commonalities and identify key differences between policy optimization methods, we develop a unified perspective that re-expresses the underlying updates in terms of a limited choice of gradient form and scaling function. In particular, we identify a parameterized space of approximate gradient updates for policy optimization that is highly structured, yet covers both classical and recent examples, including PPO. As a result, we obtain novel yet well motivated updates that generalize existing algorithms in a way that can deliver benefits both in terms of convergence speed and final result quality. An experimental investigation demonstrates that the additional degrees of freedom provided in the parameterized family of updates can be leveraged to obtain non-trivial improvements both in synthetic domains and on popular deep RL benchmarks.
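The unifying view described in the abstract, that many policy-optimization updates differ mainly in the choice of gradient form and scaling function applied to the score function ∇_θ log π_θ(a|s), can be illustrated with a small sketch. The snippet below is only a generic illustration, not the parameterization studied in the paper (which is given in the full text); the function names and the `eps` clipping constant are placeholders rather than the authors' notation.

```python
import numpy as np

# Minimal sketch (not the paper's parameterization): several policy-gradient
# updates share the form  update = w(rho, A) * grad_log_pi,  where
#   rho = pi_theta(a|s) / pi_old(a|s)  is the importance ratio and
#   A   is an advantage estimate.
# Different scaling functions w recover different algorithms.

def reinforce_weight(rho, adv):
    # On-policy REINFORCE / vanilla policy gradient: weight is the advantage.
    return adv

def is_weight(rho, adv):
    # Importance-weighted (off-policy) policy gradient.
    return rho * adv

def ppo_clip_weight(rho, adv, eps=0.2):
    # Gradient of PPO's clipped surrogate min(rho*A, clip(rho, 1-eps, 1+eps)*A):
    # it equals rho*A when the unclipped term attains the min, and 0 otherwise,
    # since the clipped term is constant in theta once clipping is active.
    unclipped = rho * adv
    clipped = np.clip(rho, 1.0 - eps, 1.0 + eps) * adv
    return unclipped if unclipped <= clipped else 0.0

def update_direction(weight_fn, rho, adv, grad_log_pi):
    # All three methods yield an update of the same structural form.
    return weight_fn(rho, adv) * grad_log_pi

if __name__ == "__main__":
    grad_log_pi = np.array([0.5, -1.0])   # toy score-function vector
    rho, adv = 1.5, 2.0                   # toy ratio and advantage estimate
    for fn in (reinforce_weight, is_weight, ppo_clip_weight):
        print(fn.__name__, update_direction(fn, rho, adv, grad_log_pi))
```

Running the toy example shows how the same gradient direction is rescaled differently by each choice of weight, with the PPO-style weight vanishing once the ratio leaves the clipping range.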

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-gummadi22a,
  title     = {A Parametric Class of Approximate Gradient Updates for Policy Optimization},
  author    = {Gummadi, Ramki and Kumar, Saurabh and Wen, Junfeng and Schuurmans, Dale},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {7998--8015},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/gummadi22a/gummadi22a.pdf},
  url       = {https://proceedings.mlr.press/v162/gummadi22a.html}
}
Endnote
%0 Conference Paper
%T A Parametric Class of Approximate Gradient Updates for Policy Optimization
%A Ramki Gummadi
%A Saurabh Kumar
%A Junfeng Wen
%A Dale Schuurmans
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-gummadi22a
%I PMLR
%P 7998--8015
%U https://proceedings.mlr.press/v162/gummadi22a.html
%V 162
APA
Gummadi, R., Kumar, S., Wen, J. & Schuurmans, D. (2022). A Parametric Class of Approximate Gradient Updates for Policy Optimization. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:7998-8015. Available from https://proceedings.mlr.press/v162/gummadi22a.html.
