Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space

Anas Barakat, Ilyas Fatkhullin, Niao He
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:1753-1800, 2023.

Abstract

We consider the reinforcement learning (RL) problem with general utilities, which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative-reward RL setting, this problem includes as particular cases constrained RL, pure exploration, and learning from demonstrations, among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality, respectively, under adequate assumptions. We further address the setting of large finite state-action spaces via linear function approximation of the occupancy measure and show a $\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.
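
For intuition, the problem can be written as $\max_\theta F(\lambda^{\pi_\theta})$, where $\lambda^{\pi_\theta}$ is the discounted state-action occupancy measure induced by the policy $\pi_\theta$; the standard cumulative-reward objective is recovered when $F$ is linear. The single-loop method described above can be sketched as normalized gradient ascent driven by a recursive-momentum (STORM-style) gradient estimator. The snippet below is an illustrative sketch only, not the paper's exact algorithm: grad_estimate (a stochastic gradient of the general utility at a given policy parameter) and sample_trajectory are hypothetical user-supplied oracles, and the constant step size and momentum are placeholder choices.

    import numpy as np

    def normalized_momentum_pg(grad_estimate, sample_trajectory, theta0,
                               step_size=0.1, momentum=0.1, num_iters=1000):
        """Sketch of a normalized policy-gradient loop with recursive momentum.

        Update rule (STORM-style variance reduction, then a normalized step):
            d_t = g(theta_t; xi_t) + (1 - momentum) * (d_{t-1} - g(theta_{t-1}; xi_t))
            theta_{t+1} = theta_t + step_size * d_t / ||d_t||
        grad_estimate(theta, xi) and sample_trajectory(theta) are hypothetical
        oracles, not the estimators used in the paper.
        """
        theta = np.asarray(theta0, dtype=float)
        xi = sample_trajectory(theta)
        d = grad_estimate(theta, xi)  # plain stochastic gradient at the first iterate
        for _ in range(num_iters):
            theta_prev = theta
            # normalized ascent step: only the direction of d is used
            theta = theta + step_size * d / (np.linalg.norm(d) + 1e-12)
            xi = sample_trajectory(theta)
            g_new = grad_estimate(theta, xi)       # gradient at the new iterate
            g_old = grad_estimate(theta_prev, xi)  # gradient at the previous iterate, same sample
            d = g_new + (1.0 - momentum) * (d - g_old)  # recursive momentum update
        return theta

Normalizing the update direction is typically what allows the step size to be chosen without knowledge of problem-dependent constants, consistent with the "parameter-free" description in the abstract.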

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-barakat23a,
  title     = {Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space},
  author    = {Barakat, Anas and Fatkhullin, Ilyas and He, Niao},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {1753--1800},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/barakat23a/barakat23a.pdf},
  url       = {https://proceedings.mlr.press/v202/barakat23a.html},
  abstract  = {We consider the reinforcement learning (RL) problem with general utilities which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative reward RL setting, this problem includes as particular cases constrained RL, pure exploration and learning from demonstrations among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality respectively, under adequate assumptions. We further address the setting of large finite state action spaces via linear function approximation of the occupancy measure and show a $\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space
%A Anas Barakat
%A Ilyas Fatkhullin
%A Niao He
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-barakat23a
%I PMLR
%P 1753--1800
%U https://proceedings.mlr.press/v202/barakat23a.html
%V 202
%X We consider the reinforcement learning (RL) problem with general utilities which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative reward RL setting, this problem includes as particular cases constrained RL, pure exploration and learning from demonstrations among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality respectively, under adequate assumptions. We further address the setting of large finite state action spaces via linear function approximation of the occupancy measure and show a $\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.
APA
Barakat, A., Fatkhullin, I. & He, N. (2023). Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:1753-1800. Available from https://proceedings.mlr.press/v202/barakat23a.html.
