Open Problem: Order Optimal Regret Bounds for Kernel-Based Reinforcement Learning

Sattar Vakili
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5340-5344, 2024.

Abstract

Reinforcement Learning (RL) has shown great empirical success in various application domains. The theoretical aspects of the problem have been extensively studied over past decades, particularly under tabular and linear Markov Decision Process structures. Recently, non-linear function approximation using kernel-based prediction has gained traction. This approach is particularly interesting as it naturally extends the linear structure, and helps explain the behavior of neural-network-based models at their infinite width limit. The analytical results however do not adequately address the performance guarantees for this case. We will highlight this open problem, overview existing partial results, and discuss related challenges.

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-vakili24a,
  title     = {Open Problem: Order Optimal Regret Bounds for Kernel-Based Reinforcement Learning},
  author    = {Vakili, Sattar},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {5340--5344},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/vakili24a/vakili24a.pdf},
  url       = {https://proceedings.mlr.press/v247/vakili24a.html},
  abstract  = {Reinforcement Learning (RL) has shown great empirical success in various application domains. The theoretical aspects of the problem have been extensively studied over past decades, particularly under tabular and linear Markov Decision Process structures. Recently, non-linear function approximation using kernel-based prediction has gained traction. This approach is particularly interesting as it naturally extends the linear structure, and helps explain the behavior of neural-network-based models at their infinite width limit. The analytical results however do not adequately address the performance guarantees for this case. We will highlight this open problem, overview existing partial results, and discuss related challenges.}
}
Endnote
%0 Conference Paper
%T Open Problem: Order Optimal Regret Bounds for Kernel-Based Reinforcement Learning
%A Sattar Vakili
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-vakili24a
%I PMLR
%P 5340--5344
%U https://proceedings.mlr.press/v247/vakili24a.html
%V 247
%X Reinforcement Learning (RL) has shown great empirical success in various application domains. The theoretical aspects of the problem have been extensively studied over past decades, particularly under tabular and linear Markov Decision Process structures. Recently, non-linear function approximation using kernel-based prediction has gained traction. This approach is particularly interesting as it naturally extends the linear structure, and helps explain the behavior of neural-network-based models at their infinite width limit. The analytical results however do not adequately address the performance guarantees for this case. We will highlight this open problem, overview existing partial results, and discuss related challenges.
APA
Vakili, S. (2024). Open Problem: Order Optimal Regret Bounds for Kernel-Based Reinforcement Learning. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:5340-5344. Available from https://proceedings.mlr.press/v247/vakili24a.html.