Online Markov Decision Processes with Terminal Law Constraints

Bianca Marin Moreno, Margaux Brégère, Pierre Gaillard, Nadia Oudjane
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-52, 2026.

Abstract

Traditional reinforcement learning usually assumes either episodic interactions with resets or continuous operation to minimize average or cumulative loss. While episodic settings have many theoretical results, resets are often unrealistic in practice. The infinite-horizon setting avoids this issue but lacks non-asymptotic guarantees in online scenarios with unknown dynamics. In this work, we move towards closing this gap by introducing a reset-free framework called the *periodic* framework, where the goal is to find *periodic policies*: policies that not only minimize cumulative loss but also return the agents to their initial state distribution after a fixed number of steps. We formalize the problem of finding optimal periodic policies and identify sufficient conditions under which it is well-defined for tabular Markov decision processes. To evaluate algorithms in this framework, we introduce the *periodic regret*, a measure that balances cumulative loss with the terminal law constraint. We then propose the first algorithms for computing periodic policies in two multi-agent settings and show they achieve sublinear periodic regret of order $\tilde{O}(T^{3/4})$. This provides the first non-asymptotic guarantees for reset-free learning in the setting of $M$ homogeneous agents, for any $M > 1$.
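
To fix intuition, here is one plausible way to write down the constrained objective and the regret the abstract describes. This is a hedged sketch under assumed notation, not the paper's exact definitions: $N$ denotes the period length, $\mu_0$ the initial state distribution, $\mu_n^{\pi}$ the state law a policy $\pi$ induces after $n$ steps, $\ell_n$ the per-step losses, and $\lambda > 0$ a weight trading off loss against constraint violation.

% Illustrative sketch only: N, mu_n^pi, ell_n, and lambda are assumed
% notation; the paper's formalization of the periodic problem may differ.
$$
\min_{\pi} \; \sum_{n=1}^{N} \mathbb{E}_{s_n \sim \mu_n^{\pi},\, a_n \sim \pi_n(\cdot \mid s_n)} \big[ \ell_n(s_n, a_n) \big]
\quad \text{subject to} \quad \mu_N^{\pi} = \mu_0,
$$

together with a periodic regret that charges both excess loss and failure to return to $\mu_0$, for instance

$$
R_T \;=\; \sum_{t=1}^{T} \ell_t(\pi_t) \;-\; \min_{\pi} \sum_{t=1}^{T} \ell_t(\pi)
\;+\; \lambda \sum_{k} \big\| \mu_N^{(k)} - \mu_0 \big\|_{1},
$$

where the second sum runs over the periods $k$ played up to time $T$. Under such a definition, a $\tilde{O}(T^{3/4})$ bound would mean both terms grow sublinearly, so the average per-step excess loss and the average terminal-law violation both vanish as $T \to \infty$.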

Cite this Paper


BibTeX
@InProceedings{pmlr-v313-moreno26a,
  title     = {Online Markov Decision Processes with Terminal Law Constraints},
  author    = {Moreno, Bianca Marin and Br{\'e}g{\`e}re, Margaux and Gaillard, Pierre and Oudjane, Nadia},
  booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory},
  pages     = {1--52},
  year      = {2026},
  editor    = {Telgarsky, Matus and Ullman, Jonathan},
  volume    = {313},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/moreno26a/moreno26a.pdf},
  url       = {https://proceedings.mlr.press/v313/moreno26a.html}
}
Endnote
%0 Conference Paper
%T Online Markov Decision Processes with Terminal Law Constraints
%A Bianca Marin Moreno
%A Margaux Brégère
%A Pierre Gaillard
%A Nadia Oudjane
%B Proceedings of The 37th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2026
%E Matus Telgarsky
%E Jonathan Ullman
%F pmlr-v313-moreno26a
%I PMLR
%P 1--52
%U https://proceedings.mlr.press/v313/moreno26a.html
%V 313
APA
Moreno, B.M., Brégère, M., Gaillard, P. & Oudjane, N. (2026). Online Markov Decision Processes with Terminal Law Constraints. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-52. Available from https://proceedings.mlr.press/v313/moreno26a.html.
