Improved High-Probability Bounds for the Temporal Difference Learning Algorithm via Exponential Stability

Sergey Samsonov, Daniil Tiapkin, Alexey Naumov, Eric Moulines
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4511-4547, 2024.

Abstract

In this paper, we consider the problem of obtaining sharp bounds on the performance of temporal difference (TD) methods with linear function approximation for policy evaluation in discounted Markov decision processes. We show that a simple algorithm with a universal, instance-independent step size, combined with Polyak-Ruppert tail averaging, is sufficient to obtain near-optimal variance and bias terms. We also provide the corresponding sample-complexity bounds. Our proof technique is based on refined error bounds for linear stochastic approximation, together with a novel stability result for products of the random matrices that arise in the TD-type recurrence.
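For concreteness, here is a minimal sketch of the algorithm class the paper analyzes: TD(0) with linear function approximation, a constant universal step size, and Polyak-Ruppert tail averaging. This is not the authors' code; the transition-stream interface, the feature map phi, and the burn-in parameter n0 are illustrative assumptions.

import numpy as np

def td0_tail_averaged(transitions, phi, dim, alpha, gamma, n, n0):
    # TD(0) with linear function approximation and Polyak-Ruppert tail averaging.
    # transitions: iterable of (s, r, s_next) tuples sampled from the Markov
    #              chain induced by the evaluated policy (assumed interface).
    # phi:         feature map, phi(s) -> np.ndarray of shape (dim,).
    # alpha:       constant, instance-independent step size.
    # gamma:       discount factor in (0, 1).
    # n, n0:       total number of updates and burn-in length (n0 < n);
    #              only the iterates theta_{n0+1}, ..., theta_n are averaged.
    theta = np.zeros(dim)
    tail_sum = np.zeros(dim)
    for t, (s, r, s_next) in enumerate(transitions, start=1):
        if t > n:
            break
        f, f_next = phi(s), phi(s_next)
        td_error = r + gamma * (f_next @ theta) - f @ theta  # one-step TD error
        theta = theta + alpha * td_error * f                 # TD(0) update
        if t > n0:
            tail_sum += theta                                # accumulate the tail
    return tail_sum / (n - n0)                               # tail-averaged iterate

Tail averaging discards the first n0 iterates before averaging, which suppresses the bias inherited from the initialization theta_0 while the averaging itself reduces the variance; this matches the role the two mechanisms play in the paper's bias and variance terms.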

Cite this Paper

BibTeX
@InProceedings{pmlr-v247-samsonov24a,
  title     = {Improved High-Probability Bounds for the Temporal Difference Learning Algorithm via Exponential Stability},
  author    = {Samsonov, Sergey and Tiapkin, Daniil and Naumov, Alexey and Moulines, Eric},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {4511--4547},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/samsonov24a/samsonov24a.pdf},
  url       = {https://proceedings.mlr.press/v247/samsonov24a.html}
}
EndNote
%0 Conference Paper
%T Improved High-Probability Bounds for the Temporal Difference Learning Algorithm via Exponential Stability
%A Sergey Samsonov
%A Daniil Tiapkin
%A Alexey Naumov
%A Eric Moulines
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-samsonov24a
%I PMLR
%P 4511--4547
%U https://proceedings.mlr.press/v247/samsonov24a.html
%V 247
APA
Samsonov, S., Tiapkin, D., Naumov, A., & Moulines, E. (2024). Improved High-Probability Bounds for the Temporal Difference Learning Algorithm via Exponential Stability. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:4511-4547. Available from https://proceedings.mlr.press/v247/samsonov24a.html.
