Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration

Seungyul Han, Youngchul Sung
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4018-4029, 2021.

Abstract

In this paper, sample-aware policy entropy regularization is proposed to enhance conventional policy entropy regularization for better exploration. Exploiting the sample distribution obtainable from the replay buffer, the proposed sample-aware entropy regularization maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer, enabling sample-efficient exploration. A practical algorithm, named diversity actor-critic (DAC), is developed by applying policy iteration to the objective function with the proposed sample-aware entropy regularization. Numerical results show that DAC significantly outperforms recent reinforcement learning algorithms.
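
As a rough formalization of the objective described in the abstract (a sketch introduced here for illustration, not quoted from the paper: q denotes the sample action distribution estimated from the replay buffer, α ∈ [0, 1] the mixture weight, β the regularization coefficient, and H the entropy), the sample-aware entropy-regularized return can be written as

    J(π) = E[ Σ_t γ^t ( r(s_t, a_t) + β · H( α · π(·|s_t) + (1 − α) · q(·|s_t) ) ) ],

which recovers the conventional entropy-regularized objective (as in soft actor-critic) when α = 1, since the weighted sum then collapses to the policy action distribution alone.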

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-han21a,
  title     = {Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration},
  author    = {Han, Seungyul and Sung, Youngchul},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {4018--4029},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/han21a/han21a.pdf},
  url       = {https://proceedings.mlr.press/v139/han21a.html}
}
APA
Han, S. & Sung, Y. (2021). Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4018-4029. Available from https://proceedings.mlr.press/v139/han21a.html.
