Krylov–Bellman boosting: Super-linear policy evaluation in general state spaces

Eric Xia, Martin Wainwright
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:9137-9166, 2023.

Abstract

We present and analyze the Krylov–Bellman Boosting algorithm for policy evaluation in general state spaces. It alternates between fitting the Bellman residual using non-parametric regression (as in boosting), and estimating the value function via the least-squares temporal difference (LSTD) procedure applied with a feature set that grows adaptively over time. By exploiting the connection to Krylov methods, we equip this method with two attractive guarantees. First, we provide a general convergence bound that allows for separate estimation errors in residual fitting and LSTD computation. Consistent with our numerical experiments, this bound shows that convergence rates depend on the restricted spectral structure, and are typically super-linear. Second, by combining this meta-result with sample-size dependent guarantees for residual fitting and LSTD computation, we obtain concrete statistical guarantees that depend on the sample size along with the complexity of the function class used to fit the residuals. We illustrate the behavior of the KBB algorithm for various types of policy evaluation problems, and typically find large reductions in sample complexity relative to the standard approach of fitted value iteration.
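The alternation the abstract describes — compute an LSTD solution over the current features, form the Bellman residual, fit it, and append the fit as a new feature — can be sketched in a toy tabular setting where the transition matrix P and reward r are known, so the residual "fit" is exact rather than a non-parametric regression. This is only an illustrative sketch of the Krylov connection, not the paper's sample-based algorithm; the function names and the tabular/exact-arithmetic assumptions are ours.

```python
import numpy as np

def lstd(Phi, P, r, gamma):
    """LSTD value estimate over feature matrix Phi (n states x k features):
    solve Phi^T (Phi - gamma P Phi) w = Phi^T r, return V = Phi w."""
    A = Phi.T @ (Phi - gamma * P @ Phi)
    b = Phi.T @ r
    # lstsq tolerates the rank deficiency that arises once the residual vanishes
    w = np.linalg.lstsq(A, b, rcond=None)[0]
    return Phi @ w

def kbb(P, r, gamma, iters):
    """Toy Krylov-Bellman boosting loop with exact residual 'fitting'."""
    n = len(r)
    Phi = np.ones((n, 1))          # start from a single constant feature
    V = lstd(Phi, P, r, gamma)
    for _ in range(iters):
        residual = r + gamma * P @ V - V   # Bellman residual of current estimate
        Phi = np.column_stack([Phi, residual])  # grow the feature set (Krylov step)
        V = lstd(Phi, P, r, gamma)
    return V
```

Because LSTD makes the Bellman residual orthogonal to the current feature span, each appended residual enlarges that span, so in this exact tabular setting the estimate matches the true value function `np.linalg.solve(np.eye(n) - gamma * P, r)` after at most n - 1 iterations; the paper's contribution is analyzing this recursion when both the residual fit and the LSTD solve are estimated from samples.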

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-xia23a,
  title = {Krylov–Bellman boosting: Super-linear policy evaluation in general state spaces},
  author = {Xia, Eric and Wainwright, Martin},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages = {9137--9166},
  year = {2023},
  editor = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume = {206},
  series = {Proceedings of Machine Learning Research},
  month = {25--27 Apr},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v206/xia23a/xia23a.pdf},
  url = {https://proceedings.mlr.press/v206/xia23a.html},
  abstract = {We present and analyze the Krylov–Bellman Boosting algorithm for policy evaluation in general state spaces. It alternates between fitting the Bellman residual using non-parametric regression (as in boosting), and estimating the value function via the least-squares temporal difference (LSTD) procedure applied with a feature set that grows adaptively over time. By exploiting the connection to Krylov methods, we equip this method with two attractive guarantees. First, we provide a general convergence bound that allows for separate estimation errors in residual fitting and LSTD computation. Consistent with our numerical experiments, this bound shows that convergence rates depend on the restricted spectral structure, and are typically super-linear. Second, by combining this meta-result with sample-size dependent guarantees for residual fitting and LSTD computation, we obtain concrete statistical guarantees that depend on the sample size along with the complexity of the function class used to fit the residuals. We illustrate the behavior of the KBB algorithm for various types of policy evaluation problems, and typically find large reductions in sample complexity relative to the standard approach of fitted value iteration.}
}
Endnote
%0 Conference Paper %T Krylov–Bellman boosting: Super-linear policy evaluation in general state spaces %A Eric Xia %A Martin Wainwright %B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2023 %E Francisco Ruiz %E Jennifer Dy %E Jan-Willem van de Meent %F pmlr-v206-xia23a %I PMLR %P 9137--9166 %U https://proceedings.mlr.press/v206/xia23a.html %V 206 %X We present and analyze the Krylov–Bellman Boosting algorithm for policy evaluation in general state spaces. It alternates between fitting the Bellman residual using non-parametric regression (as in boosting), and estimating the value function via the least-squares temporal difference (LSTD) procedure applied with a feature set that grows adaptively over time. By exploiting the connection to Krylov methods, we equip this method with two attractive guarantees. First, we provide a general convergence bound that allows for separate estimation errors in residual fitting and LSTD computation. Consistent with our numerical experiments, this bound shows that convergence rates depend on the restricted spectral structure, and are typically super-linear. Second, by combining this meta-result with sample-size dependent guarantees for residual fitting and LSTD computation, we obtain concrete statistical guarantees that depend on the sample size along with the complexity of the function class used to fit the residuals. We illustrate the behavior of the KBB algorithm for various types of policy evaluation problems, and typically find large reductions in sample complexity relative to the standard approach of fitted value iteration.
APA
Xia, E. & Wainwright, M. (2023). Krylov–Bellman boosting: Super-linear policy evaluation in general state spaces. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:9137-9166. Available from https://proceedings.mlr.press/v206/xia23a.html.
