Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions

Noah Golowich, Ankur Moitra
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1939-1981, 2024.

Abstract

One of the most natural approaches to reinforcement learning (RL) with function approximation is value iteration, which inductively generates approximations to the optimal value function by solving a sequence of regression problems. To ensure the success of value iteration, it is typically assumed that Bellman completeness holds, which ensures that these regression problems are well-specified. We study the problem of learning an optimal policy under Bellman completeness in the online model of RL with linear function approximation. In the linear setting, while statistically efficient algorithms are known under Bellman completeness (e.g., (Jiang et al., 2017; Zanette et al., 2020a)), these algorithms all rely on the principle of global optimism, which requires solving a nonconvex optimization problem. In particular, it has remained open whether computationally efficient algorithms exist. In this paper we give the first polynomial-time algorithm for RL under linear Bellman completeness when the number of actions is any constant.
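
For reference, the linear Bellman completeness condition studied in this line of work can be stated as follows (a sketch in standard notation; the symbols φ, P, r, and 𝒜 are ours and may differ from the paper's exact formalization). Given a feature map φ : S × A → R^d, for every weight vector w ∈ R^d there exists w' ∈ R^d such that, for all state–action pairs (s, a),

\[
\langle \phi(s,a),\, w' \rangle
\;=\;
r(s,a) \;+\; \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\Big[\, \max_{a' \in \mathcal{A}} \langle \phi(s',a'),\, w \rangle \,\Big].
\]

In words, the Bellman backup of any linear value function is again linear in the features, so each regression problem arising in value iteration is well-specified.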

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-golowich24a,
  title     = {Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions},
  author    = {Golowich, Noah and Moitra, Ankur},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {1939--1981},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/golowich24a/golowich24a.pdf},
  url       = {https://proceedings.mlr.press/v247/golowich24a.html},
  abstract  = {One of the most natural approaches to reinforcement learning (RL) with function approximation is value iteration, which inductively generates approximations to the optimal value function by solving a sequence of regression problems. To ensure the success of value iteration, it is typically assumed that Bellman completeness holds, which ensures that these regression problems are well-specified. We study the problem of learning an optimal policy under Bellman completeness in the online model of RL with linear function approximation. In the linear setting, while statistically efficient algorithms are known under Bellman completeness (e.g., (Jiang et al., 2017; Zanette et al., 2020a)), these algorithms all rely on the principle of global optimism, which requires solving a nonconvex optimization problem. In particular, it has remained open whether computationally efficient algorithms exist. In this paper we give the first polynomial-time algorithm for RL under linear Bellman completeness when the number of actions is any constant.}
}
Endnote
%0 Conference Paper
%T Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions
%A Noah Golowich
%A Ankur Moitra
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-golowich24a
%I PMLR
%P 1939--1981
%U https://proceedings.mlr.press/v247/golowich24a.html
%V 247
%X One of the most natural approaches to reinforcement learning (RL) with function approximation is value iteration, which inductively generates approximations to the optimal value function by solving a sequence of regression problems. To ensure the success of value iteration, it is typically assumed that Bellman completeness holds, which ensures that these regression problems are well-specified. We study the problem of learning an optimal policy under Bellman completeness in the online model of RL with linear function approximation. In the linear setting, while statistically efficient algorithms are known under Bellman completeness (e.g., (Jiang et al., 2017; Zanette et al., 2020a)), these algorithms all rely on the principle of global optimism, which requires solving a nonconvex optimization problem. In particular, it has remained open whether computationally efficient algorithms exist. In this paper we give the first polynomial-time algorithm for RL under linear Bellman completeness when the number of actions is any constant.
APA
Golowich, N. & Moitra, A. (2024). Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:1939-1981. Available from https://proceedings.mlr.press/v247/golowich24a.html.