Rate-Optimal Policy Optimization for Linear Markov Decision Processes

Uri Sherman, Alon Cohen, Tomer Koren, Yishay Mansour
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:44815-44837, 2024.

Abstract

We study regret minimization in online episodic linear Markov Decision Processes and propose a policy optimization algorithm that is computationally efficient and obtains a rate-optimal $\widetilde O (\sqrt K)$ regret bound, where $K$ denotes the number of episodes. Our work is the first to establish the optimal rate (in terms of $K$) of convergence in the stochastic setting with bandit feedback using a policy-optimization-based approach, and the first to establish the optimal rate in the adversarial setting with full-information feedback, for which no algorithm with an optimal rate guarantee was previously known.
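For context, the regret quantity referenced above is the standard one for episodic MDPs; the following is a generic sketch of the definition, not notation taken from the paper itself (symbols $\pi_k$, $\pi^\star$, $V$, and $s_1^k$ are illustrative):

```latex
% Regret over K episodes: cumulative gap between the value of a
% comparator policy \pi^\star and the learner's policy \pi_k,
% measured at the initial state s_1^k of each episode k.
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \Big( V^{\pi^\star}\!\big(s_1^k\big) \;-\; V^{\pi_k}\!\big(s_1^k\big) \Big)
```

In the stochastic setting $\pi^\star$ is an optimal policy; in the adversarial setting (where rewards may change between episodes) it is the best fixed policy in hindsight. A $\widetilde O(\sqrt K)$ bound on this sum is rate optimal in $K$.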

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-sherman24a,
  title     = {Rate-Optimal Policy Optimization for Linear {M}arkov Decision Processes},
  author    = {Sherman, Uri and Cohen, Alon and Koren, Tomer and Mansour, Yishay},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {44815--44837},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/sherman24a/sherman24a.pdf},
  url       = {https://proceedings.mlr.press/v235/sherman24a.html},
  abstract  = {We study regret minimization in online episodic linear Markov Decision Processes, and propose a policy optimization algorithm that is computationally efficient, and obtains rate optimal $\widetilde O (\sqrt K)$ regret where $K$ denotes the number of episodes. Our work is the first to establish the optimal rate (in terms of $K$) of convergence in the stochastic setting with bandit feedback using a policy optimization based approach, and the first to establish the optimal rate in the adversarial setup with full information feedback, for which no algorithm with an optimal rate guarantee was previously known.}
}
Endnote
%0 Conference Paper
%T Rate-Optimal Policy Optimization for Linear Markov Decision Processes
%A Uri Sherman
%A Alon Cohen
%A Tomer Koren
%A Yishay Mansour
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-sherman24a
%I PMLR
%P 44815--44837
%U https://proceedings.mlr.press/v235/sherman24a.html
%V 235
%X We study regret minimization in online episodic linear Markov Decision Processes, and propose a policy optimization algorithm that is computationally efficient, and obtains rate optimal $\widetilde O (\sqrt K)$ regret where $K$ denotes the number of episodes. Our work is the first to establish the optimal rate (in terms of $K$) of convergence in the stochastic setting with bandit feedback using a policy optimization based approach, and the first to establish the optimal rate in the adversarial setup with full information feedback, for which no algorithm with an optimal rate guarantee was previously known.
APA
Sherman, U., Cohen, A., Koren, T., & Mansour, Y. (2024). Rate-Optimal Policy Optimization for Linear Markov Decision Processes. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:44815-44837. Available from https://proceedings.mlr.press/v235/sherman24a.html.
