SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation

Bo Dai, Albert Shaw, Lihong Li, Lin Xiao, Niao He, Zhen Liu, Jianshu Chen, Le Song
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1125-1134, 2018.

Abstract

When function approximation is used, solving the Bellman optimality equation with stability guarantees has remained a major open problem in reinforcement learning for decades. The fundamental difficulty is that the Bellman operator may become an expansion in general, resulting in oscillating and even divergent behavior of popular algorithms like Q-learning. In this paper, we revisit the Bellman equation, and reformulate it into a novel primal-dual optimization problem using Nesterov’s smoothing technique and the Legendre-Fenchel transformation. We then develop a new algorithm, called Smoothed Bellman Error Embedding, to solve this optimization problem where any differentiable function class may be used. We provide what we believe to be the first convergence guarantee for general nonlinear function approximation, and analyze the algorithm’s sample complexity. Empirically, our algorithm compares favorably to state-of-the-art baselines in several benchmark control problems.
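To make the primal-dual reformulation described above concrete, here is a minimal, self-contained Python sketch of the kind of saddle-point objective the abstract refers to: the squared smoothed (entropy-regularized) Bellman residual is rewritten through the Legendre-Fenchel conjugate of the square function, introducing a dual variable so the objective can be estimated from single sampled transitions. The tabular setup, function names, and default constants below are illustrative assumptions, not the paper's code or its exact objective.

import numpy as np

def sbeed_objective(V, log_pi, nu, batch, gamma=0.99, lam=0.01):
    """Empirical primal-dual objective on a batch of transitions.

    batch:  list of (s, a, r, s_next) tuples with integer states/actions
    V:      state values, shape [num_states]
    log_pi: log action probabilities, shape [num_states, num_actions]
    nu:     dual variable per (state, action), shape [num_states, num_actions]

    Minimized over (V, log_pi), maximized over nu.
    """
    total = 0.0
    for s, a, r, s_next in batch:
        # Smoothed Bellman residual: reward minus entropy penalty plus
        # discounted next-state value, compared against the current value.
        delta = r - lam * log_pi[s, a] + gamma * V[s_next] - V[s]
        # Conjugate (Legendre-Fenchel) form: max over nu of 2*nu*delta - nu^2
        # equals delta^2, but is linear in delta for fixed nu.
        total += 2.0 * nu[s, a] * delta - nu[s, a] ** 2
    return total / len(batch)

# Toy usage with random tabular parameters (purely illustrative):
rng = np.random.default_rng(0)
num_states, num_actions = 5, 2
V = rng.normal(size=num_states)
logits = rng.normal(size=(num_states, num_actions))
log_pi = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))  # log-softmax
nu = np.zeros((num_states, num_actions))
batch = [(0, 1, 1.0, 2), (2, 0, 0.0, 3)]
print(sbeed_objective(V, log_pi, nu, batch))

The point of the conjugate trick is that maximizing over the dual variable recovers the squared residual without requiring two independent samples of the next state, which is what allows unbiased stochastic-gradient updates of the value and policy parameters in any differentiable function class.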

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-dai18c,
  title     = {{SBEED}: Convergent Reinforcement Learning with Nonlinear Function Approximation},
  author    = {Dai, Bo and Shaw, Albert and Li, Lihong and Xiao, Lin and He, Niao and Liu, Zhen and Chen, Jianshu and Song, Le},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1125--1134},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/dai18c/dai18c.pdf},
  url       = {https://proceedings.mlr.press/v80/dai18c.html},
  abstract  = {When function approximation is used, solving the Bellman optimality equation with stability guarantees has remained a major open problem in reinforcement learning for decades. The fundamental difficulty is that the Bellman operator may become an expansion in general, resulting in oscillating and even divergent behavior of popular algorithms like Q-learning. In this paper, we revisit the Bellman equation, and reformulate it into a novel primal-dual optimization problem using Nesterov’s smoothing technique and the Legendre-Fenchel transformation. We then develop a new algorithm, called Smoothed Bellman Error Embedding, to solve this optimization problem where any differentiable function class may be used. We provide what we believe to be the first convergence guarantee for general nonlinear function approximation, and analyze the algorithm’s sample complexity. Empirically, our algorithm compares favorably to state-of-the-art baselines in several benchmark control problems.}
}
Endnote
%0 Conference Paper
%T SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation
%A Bo Dai
%A Albert Shaw
%A Lihong Li
%A Lin Xiao
%A Niao He
%A Zhen Liu
%A Jianshu Chen
%A Le Song
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-dai18c
%I PMLR
%P 1125--1134
%U https://proceedings.mlr.press/v80/dai18c.html
%V 80
%X When function approximation is used, solving the Bellman optimality equation with stability guarantees has remained a major open problem in reinforcement learning for decades. The fundamental difficulty is that the Bellman operator may become an expansion in general, resulting in oscillating and even divergent behavior of popular algorithms like Q-learning. In this paper, we revisit the Bellman equation, and reformulate it into a novel primal-dual optimization problem using Nesterov’s smoothing technique and the Legendre-Fenchel transformation. We then develop a new algorithm, called Smoothed Bellman Error Embedding, to solve this optimization problem where any differentiable function class may be used. We provide what we believe to be the first convergence guarantee for general nonlinear function approximation, and analyze the algorithm’s sample complexity. Empirically, our algorithm compares favorably to state-of-the-art baselines in several benchmark control problems.
APA
Dai, B., Shaw, A., Li, L., Xiao, L., He, N., Liu, Z., Chen, J., & Song, L. (2018). SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1125-1134. Available from https://proceedings.mlr.press/v80/dai18c.html.
