Logarithmic regret of exploration in average reward Markov decision processes

Victor Boone, Bruno Gaujal
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:454-533, 2025.

Abstract

In average reward Markov decision processes, state-of-the-art algorithms for regret minimization follow a well-established framework: they are model-based, optimistic and episodic. First, they maintain a confidence region from which optimistic policies are computed using a well-known subroutine called Extended Value Iteration (EVI). Second, these policies are used over time windows called episodes, each ended by the Doubling Trick (DT) rule or a variant thereof. In this work, without modifying EVI, we show that there is a significant advantage in replacing the doubling trick by another simple rule, which we call the Vanishing Multiplicative (VM) rule. When episodes are managed with VM, the algorithm's regret is, both in theory and in practice, as good as, if not better than, with DT, while the one-shot behavior is greatly improved. More specifically, bad episodes (when sub-optimal policies are being used) are handled much better under VM than under DT, making the regret of exploration logarithmic rather than linear. These results are made possible by a new in-depth understanding of the contrasting behaviors of confidence regions during good and bad episodes.
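To make the contrast concrete, here is a minimal Python sketch of the two episode-stopping checks described in the abstract. The Doubling Trick check follows the classical UCRL2-style criterion (end the episode once some state-action pair's within-episode visit count reaches its count at the start of the episode); the Vanishing Multiplicative check is only sketched, and its 1/(k+1) threshold schedule, along with the function names and toy counts, are illustrative assumptions rather than the rule defined in the paper.

# Illustrative comparison of two episode-stopping rules for an episodic,
# model-based learner (UCRL2-style loop). The DT check matches the classical
# doubling criterion; the VM threshold schedule below is a placeholder
# assumption, not the rule defined by Boone & Gaujal.

def dt_should_stop(nu_episode, N_start):
    """Doubling Trick (DT): end the current episode as soon as some
    state-action pair has been visited within the episode at least as many
    times as it had been visited before the episode started."""
    return any(nu_episode[sa] >= max(1, N_start[sa]) for sa in nu_episode)

def vm_should_stop(nu_episode, N_start, k):
    """Vanishing Multiplicative (VM) rule, sketched: same structure as DT,
    but the multiplicative threshold shrinks with the episode index k, so
    policies are recomputed more and more frequently. The 1/(k+1) decay is
    purely illustrative."""
    factor = 1.0 / (k + 1)
    return any(nu_episode[sa] >= max(1, factor * N_start[sa]) for sa in nu_episode)

# Toy usage: counts keyed by (state, action) pairs.
N_start = {("s0", "a0"): 8, ("s0", "a1"): 2}     # visits before the episode
nu_episode = {("s0", "a0"): 3, ("s0", "a1"): 1}  # visits within the episode
print(dt_should_stop(nu_episode, N_start))       # False: no count has doubled yet
print(vm_should_stop(nu_episode, N_start, k=5))  # True under the vanishing threshold

In a learning loop, the chosen check would be evaluated after every step, and a new optimistic policy would be computed via EVI whenever it fires. The abstract's claim is that, without changing EVI, swapping DT for a vanishing multiplicative threshold keeps the overall regret guarantees while making the regret incurred during bad episodes logarithmic instead of linear.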

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-boone25a,
  title     = {Logarithmic regret of exploration in average reward Markov decision processes},
  author    = {Boone, Victor and Gaujal, Bruno},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {454--533},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/boone25a/boone25a.pdf},
  url       = {https://proceedings.mlr.press/v291/boone25a.html},
  abstract  = {In average reward Markov decision processes, state-of-the-art algorithms for regret minimization follow a well-established framework: They are model-based, optimistic and episodic. First, they maintain a confidence region from which optimistic policies are computed using a well-known subroutine called Extended Value Iteration (EVI). Second, these policies are used over time windows called episodes, each ended by the Doubling Trick (DT) rule or a variant thereof. In this work, without modifying EVI, we show that there is a significant advantage in replacing the doubling trick by another simple rule, that we call the Vanishing Multiplicative rule (VM). When managing episodes with VM, the algorithm's regret is, both in theory and in practice, as good if not better than with DT while the one-shot behavior is greatly improved. More specifically, the management of bad episodes (when sub-optimal policies are being used) is much better under VM than DT by making the regret of exploration logarithmic rather than linear. These results are made possible by a new in-depth understanding of the contrasting behaviors of confidence regions during good and bad episodes.}
}
Endnote
%0 Conference Paper
%T Logarithmic regret of exploration in average reward Markov decision processes
%A Victor Boone
%A Bruno Gaujal
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-boone25a
%I PMLR
%P 454--533
%U https://proceedings.mlr.press/v291/boone25a.html
%V 291
%X In average reward Markov decision processes, state-of-the-art algorithms for regret minimization follow a well-established framework: They are model-based, optimistic and episodic. First, they maintain a confidence region from which optimistic policies are computed using a well-known subroutine called Extended Value Iteration (EVI). Second, these policies are used over time windows called episodes, each ended by the Doubling Trick (DT) rule or a variant thereof. In this work, without modifying EVI, we show that there is a significant advantage in replacing the doubling trick by another simple rule, that we call the Vanishing Multiplicative rule (VM). When managing episodes with VM, the algorithm's regret is, both in theory and in practice, as good if not better than with DT while the one-shot behavior is greatly improved. More specifically, the management of bad episodes (when sub-optimal policies are being used) is much better under VM than DT by making the regret of exploration logarithmic rather than linear. These results are made possible by a new in-depth understanding of the contrasting behaviors of confidence regions during good and bad episodes.
APA
Boone, V. & Gaujal, B. (2025). Logarithmic regret of exploration in average reward Markov decision processes. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:454-533. Available from https://proceedings.mlr.press/v291/boone25a.html.
