On the Statistical Benefits of Temporal Difference Learning

David Cheikhi, Daniel Russo
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:4269-4293, 2023.

Abstract

Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time-steps. Focusing on finite state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive inverse trajectory pooling coefficient completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD’s errors are bounded in terms of a novel measure – the problem’s trajectory crossing time – which can be much smaller than the problem’s time horizon.
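The contrast the abstract draws can be made concrete with a toy experiment (this sketch is not from the paper; the chain, sample sizes, and step size are illustrative assumptions). Two start states funnel into a shared intermediate state before a noisy terminal reward: a direct Monte Carlo estimate of each start state's value uses only that state's own trajectories, while TD(0) bootstraps through the shared state and so pools the reward samples from all trajectories.

```python
import random

random.seed(0)

def sample_trajectories(n):
    """Each trajectory: start at state 0 or 1, step to shared state 2
    (reward 0), then terminate with a noisy reward ~ N(1, 0.5)."""
    trajs = []
    for i in range(n):
        start = i % 2  # alternate start states
        r_final = random.gauss(1.0, 0.5)
        trajs.append([(start, 0.0, 2), (2, r_final, None)])
    return trajs

def monte_carlo(trajs):
    # Direct estimation: V(s) is the average return over trajectories
    # that start in s -- no data sharing across start states.
    V, counts = {0: 0.0, 1: 0.0, 2: 0.0}, {0: 0, 1: 0, 2: 0}
    for traj in trajs:
        G = sum(r for _, r, _ in traj)  # total return of the trajectory
        s0 = traj[0][0]
        counts[s0] += 1
        V[s0] += (G - V[s0]) / counts[s0]  # running average
    return V

def td0(trajs, alpha=0.05, sweeps=500):
    # TD(0): push each estimate toward r + V(next state). Every
    # trajectory passes through state 2, so all reward samples are
    # pooled into V(2) and propagated back to both start states.
    V = {0: 0.0, 1: 0.0, 2: 0.0}
    for _ in range(sweeps):
        for traj in trajs:
            for s, r, s_next in traj:
                target = r + (V[s_next] if s_next is not None else 0.0)
                V[s] += alpha * (target - V[s])
    return V

trajs = sample_trajectories(40)
mc = monte_carlo(trajs)   # V(0), V(1) each rest on only 20 reward samples
td = td0(trajs)           # V(0) ~ V(1) ~ V(2): pooled over all 40 samples
print(mc)
print(td)
```

Here every trajectory crosses state 2, so TD effectively doubles the sample size behind each start-state estimate; in the paper's terms, a small inverse trajectory pooling coefficient. When trajectories from different start states never cross, TD's advantage vanishes.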

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-cheikhi23a,
  title     = {On the Statistical Benefits of Temporal Difference Learning},
  author    = {Cheikhi, David and Russo, Daniel},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {4269--4293},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/cheikhi23a/cheikhi23a.pdf},
  url       = {https://proceedings.mlr.press/v202/cheikhi23a.html},
  abstract  = {Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time-steps. Focusing on finite state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive inverse trajectory pooling coefficient completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD's errors are bounded in terms of a novel measure -- the problem's trajectory crossing time -- which can be much smaller than the problem's time horizon.}
}
Endnote
%0 Conference Paper
%T On the Statistical Benefits of Temporal Difference Learning
%A David Cheikhi
%A Daniel Russo
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-cheikhi23a
%I PMLR
%P 4269--4293
%U https://proceedings.mlr.press/v202/cheikhi23a.html
%V 202
%X Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time-steps. Focusing on finite state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive inverse trajectory pooling coefficient completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD’s errors are bounded in terms of a novel measure – the problem’s trajectory crossing time – which can be much smaller than the problem’s time horizon.
APA
Cheikhi, D. & Russo, D. (2023). On the Statistical Benefits of Temporal Difference Learning. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:4269-4293. Available from https://proceedings.mlr.press/v202/cheikhi23a.html.