An Alternate Policy Gradient Estimator for Softmax Policies

Shivam Garg, Samuele Tosatto, Yangchen Pan, Martha White, Rupam Mahmood
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:6630-6689, 2022.

Abstract

Policy gradient (PG) estimators are ineffective in dealing with softmax policies that are sub-optimally saturated, which refers to the situation when the policy concentrates its probability mass on sub-optimal actions. Sub-optimal policy saturation may arise from bad policy initialization or sudden changes in the environment that occur after the policy has already converged. Current softmax PG estimators require a large number of updates to overcome policy saturation, which causes low sample efficiency and poor adaptability to new situations. To mitigate this problem, we propose a novel PG estimator for softmax policies that utilizes the bias in the critic estimate and the noise present in the reward signal to escape the saturated regions of the policy parameter space. Our theoretical analysis and experiments, conducted on bandits and various reinforcement learning environments, show that this new estimator is significantly more robust to policy saturation.
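To make the saturation problem concrete, below is a minimal sketch of the standard softmax policy gradient (REINFORCE) update on a small bandit, not the paper's proposed estimator. The setup (three arms, reward means, step size, initial preferences) is an illustrative assumption; it shows that when the policy starts heavily concentrated on a sub-optimal arm, the sampled gradients are nearly zero, so the vanilla estimator needs many updates to escape.

```python
import numpy as np

# Illustrative sketch (NOT the paper's estimator): vanilla softmax PG on a
# 3-armed bandit, initialized so the policy is saturated on a sub-optimal arm.
# Reward means, step size, and initialization are assumed for illustration.

rng = np.random.default_rng(0)
reward_means = np.array([1.0, 0.0, 0.0])   # arm 0 is optimal
theta = np.array([0.0, 10.0, 0.0])         # preferences saturated on arm 1
alpha = 0.1                                # step size

def softmax(z):
    z = z - z.max()                        # numerical stability
    e = np.exp(z)
    return e / e.sum()

for t in range(5000):
    pi = softmax(theta)
    a = rng.choice(3, p=pi)                        # sample an action
    r = reward_means[a] + rng.normal(scale=0.1)    # noisy reward
    # Vanilla softmax PG estimator: grad log pi(a) = onehot(a) - pi,
    # which is close to the zero vector whenever pi is saturated.
    grad = r * (np.eye(3)[a] - pi)
    theta += alpha * grad

print(softmax(theta))  # probability mass typically remains stuck on arm 1
```

In this sketch the saturated policy almost never samples the optimal arm, and when it resamples the dominant arm the update direction onehot(a) - pi is nearly zero, which is the slow-recovery behavior the abstract describes.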

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-garg22b,
  title     = {An Alternate Policy Gradient Estimator for Softmax Policies},
  author    = {Garg, Shivam and Tosatto, Samuele and Pan, Yangchen and White, Martha and Mahmood, Rupam},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {6630--6689},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/garg22b/garg22b.pdf},
  url       = {https://proceedings.mlr.press/v151/garg22b.html},
  abstract  = {Policy gradient (PG) estimators are ineffective in dealing with softmax policies that are sub-optimally saturated, which refers to the situation when the policy concentrates its probability mass on sub-optimal actions. Sub-optimal policy saturation may arise from bad policy initialization or sudden changes in the environment that occur after the policy has already converged. Current softmax PG estimators require a large number of updates to overcome policy saturation, which causes low sample efficiency and poor adaptability to new situations. To mitigate this problem, we propose a novel PG estimator for softmax policies that utilizes the bias in the critic estimate and the noise present in the reward signal to escape the saturated regions of the policy parameter space. Our theoretical analysis and experiments, conducted on bandits and various reinforcement learning environments, show that this new estimator is significantly more robust to policy saturation.}
}
Endnote
%0 Conference Paper
%T An Alternate Policy Gradient Estimator for Softmax Policies
%A Shivam Garg
%A Samuele Tosatto
%A Yangchen Pan
%A Martha White
%A Rupam Mahmood
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-garg22b
%I PMLR
%P 6630--6689
%U https://proceedings.mlr.press/v151/garg22b.html
%V 151
%X Policy gradient (PG) estimators are ineffective in dealing with softmax policies that are sub-optimally saturated, which refers to the situation when the policy concentrates its probability mass on sub-optimal actions. Sub-optimal policy saturation may arise from bad policy initialization or sudden changes in the environment that occur after the policy has already converged. Current softmax PG estimators require a large number of updates to overcome policy saturation, which causes low sample efficiency and poor adaptability to new situations. To mitigate this problem, we propose a novel PG estimator for softmax policies that utilizes the bias in the critic estimate and the noise present in the reward signal to escape the saturated regions of the policy parameter space. Our theoretical analysis and experiments, conducted on bandits and various reinforcement learning environments, show that this new estimator is significantly more robust to policy saturation.
APA
Garg, S., Tosatto, S., Pan, Y., White, M. & Mahmood, R. (2022). An Alternate Policy Gradient Estimator for Softmax Policies. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:6630-6689. Available from https://proceedings.mlr.press/v151/garg22b.html.
