Reinforcement Learning with Random Time Horizons

Enric Ribera Borrell, Lorenz Richter, Christof Schuette
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:5101-5123, 2025.

Abstract

We extend the standard reinforcement learning framework to random time horizons. While the classical setting typically assumes finite and deterministic or infinite trajectory runtimes, we argue that many real-world applications naturally exhibit random (potentially trajectory-dependent) stopping times. Since those stopping times typically depend on the policy, their randomness affects policy gradient formulas, which we derive rigorously in this work (mostly for the first time) for both stochastic and deterministic policies. We present two complementary perspectives, trajectory-based and state-space-based, and establish connections to optimal control theory. Our numerical experiments demonstrate that using the proposed formulas can significantly improve optimization convergence compared to traditional approaches.
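
To make the setting concrete, the sketch below (an illustrative toy assumption, not the paper's derivation) shows the classical REINFORCE estimator applied to episodes that end at a random, policy-dependent stopping time: a biased one-dimensional random walk that stops when it first reaches a target region. The naive estimator treats the stopped episode like a fixed-horizon one; the paper's contribution is a rigorous account of how the policy-dependence of the stopping time enters the gradient, which this sketch deliberately does not capture.

# Illustrative sketch only (assumed toy setup, not taken from the paper):
# classical REINFORCE on episodes terminated at a random stopping time tau.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def run_episode(theta, max_steps=200):
    """One rollout of a 1D walk with P(step right) = sigmoid(theta), stopped
    the first time the state reaches the target region x >= 5 (a random,
    policy-dependent stopping time) or at a step cap."""
    x, score, ret = 0.0, 0.0, 0.0
    p_right = sigmoid(theta)
    for _ in range(max_steps):
        go_right = rng.random() < p_right
        # accumulate d/dtheta log pi(a_t) along the trajectory, up to tau
        score += (1.0 - p_right) if go_right else -p_right
        x += 1.0 if go_right else -1.0
        ret -= 1.0                      # running cost per step
        if x >= 5.0:                    # stopping condition hit at time tau
            ret += 10.0                 # terminal reward
            break
    return score, ret

def naive_policy_gradient(theta, n_episodes=500):
    """Monte Carlo REINFORCE estimate E[return * score], treating the stopped
    episode like a fixed-horizon one (the 'traditional approach' in the abstract)."""
    samples = [s * r for s, r in (run_episode(theta) for _ in range(n_episodes))]
    return float(np.mean(samples))

theta = 0.0
for _ in range(50):                     # plain gradient ascent on the toy problem
    theta += 0.05 * naive_policy_gradient(theta)
print("learned drift parameter theta:", theta)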

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-borrell25a,
  title     = {Reinforcement Learning with Random Time Horizons},
  author    = {Borrell, Enric Ribera and Richter, Lorenz and Schuette, Christof},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {5101--5123},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/borrell25a/borrell25a.pdf},
  url       = {https://proceedings.mlr.press/v267/borrell25a.html},
  abstract  = {We extend the standard reinforcement learning framework to random time horizons. While the classical setting typically assumes finite and deterministic or infinite runtimes of trajectories, we argue that multiple real-world applications naturally exhibit random (potentially trajectory-dependent) stopping times. Since those stopping times typically depend on the policy, their randomness has an effect on policy gradient formulas, which we (mostly for the first time) derive rigorously in this work both for stochastic and deterministic policies. We present two complementary perspectives, trajectory or state-space based, and establish connections to optimal control theory. Our numerical experiments demonstrate that using the proposed formulas can significantly improve optimization convergence compared to traditional approaches.}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning with Random Time Horizons
%A Enric Ribera Borrell
%A Lorenz Richter
%A Christof Schuette
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-borrell25a
%I PMLR
%P 5101--5123
%U https://proceedings.mlr.press/v267/borrell25a.html
%V 267
%X We extend the standard reinforcement learning framework to random time horizons. While the classical setting typically assumes finite and deterministic or infinite runtimes of trajectories, we argue that multiple real-world applications naturally exhibit random (potentially trajectory-dependent) stopping times. Since those stopping times typically depend on the policy, their randomness has an effect on policy gradient formulas, which we (mostly for the first time) derive rigorously in this work both for stochastic and deterministic policies. We present two complementary perspectives, trajectory or state-space based, and establish connections to optimal control theory. Our numerical experiments demonstrate that using the proposed formulas can significantly improve optimization convergence compared to traditional approaches.
APA
Borrell, E.R., Richter, L. & Schuette, C. (2025). Reinforcement Learning with Random Time Horizons. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:5101-5123. Available from https://proceedings.mlr.press/v267/borrell25a.html.
