Understanding the Impact of Entropy on Policy Optimization

Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, Dale Schuurmans
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:151-160, 2019.

Abstract

Entropy regularization is commonly used to improve policy optimization in reinforcement learning. It is believed to help with exploration by encouraging the selection of more stochastic policies. In this work, we analyze this claim using new visualizations of the optimization landscape based on randomly perturbing the loss function. We first show that even with access to the exact gradient, policy optimization is difficult due to the geometry of the objective function. We then qualitatively show that in some environments, a policy with higher entropy can make the optimization landscape smoother, thereby connecting local optima and enabling the use of larger learning rates. This paper presents new tools for understanding the optimization landscape, shows that policy entropy serves as a regularizer, and highlights the challenge of designing general-purpose policy optimization algorithms.
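To make the two key ideas concrete, here is a minimal sketch (not the authors' code) of an entropy-regularized objective evaluated along random perturbation directions of the policy parameters, for a toy one-state softmax policy. The reward values, regularization weight tau, and perturbation scales are illustrative assumptions.

```python
# Minimal sketch: entropy-regularized objective for a one-state softmax policy,
# sliced along random directions in parameter space (the kind of random-
# perturbation landscape view described in the abstract). Rewards and tau are
# assumed values, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
rewards = np.array([1.0, 0.8, 0.0])   # assumed per-action rewards
tau = 0.1                             # assumed entropy-regularization weight

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def objective(theta):
    """Expected reward plus tau times the policy entropy."""
    pi = softmax(theta)
    entropy = -np.sum(pi * np.log(pi + 1e-12))
    return pi @ rewards + tau * entropy

# Evaluate the objective along a few random unit directions around theta0.
theta0 = np.zeros(3)
alphas = np.linspace(-3.0, 3.0, 61)
for k in range(5):
    d = rng.standard_normal(3)
    d /= np.linalg.norm(d)
    slice_values = [objective(theta0 + a * d) for a in alphas]
    print(f"direction {k}: min={min(slice_values):.3f}, max={max(slice_values):.3f}")
```

Increasing tau in this sketch flattens the objective's curvature between the deterministic corners of the simplex, which is the smoothing effect the paper studies qualitatively.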

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-ahmed19a,
  title     = {Understanding the Impact of Entropy on Policy Optimization},
  author    = {Ahmed, Zafarali and Le Roux, Nicolas and Norouzi, Mohammad and Schuurmans, Dale},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {151--160},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/ahmed19a/ahmed19a.pdf},
  url       = {https://proceedings.mlr.press/v97/ahmed19a.html}
}
Endnote
%0 Conference Paper
%T Understanding the Impact of Entropy on Policy Optimization
%A Zafarali Ahmed
%A Nicolas Le Roux
%A Mohammad Norouzi
%A Dale Schuurmans
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-ahmed19a
%I PMLR
%P 151--160
%U https://proceedings.mlr.press/v97/ahmed19a.html
%V 97
APA
Ahmed, Z., Le Roux, N., Norouzi, M. & Schuurmans, D. (2019). Understanding the Impact of Entropy on Policy Optimization. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:151-160. Available from https://proceedings.mlr.press/v97/ahmed19a.html.