Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time

Jeongho Kim, Insoon Yang
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:739-748, 2020.

Abstract

In this paper, we introduce Hamilton-Jacobi-Bellman (HJB) equations for Q-functions in continuous-time optimal control problems with Lipschitz continuous controls. The standard Q-function used in reinforcement learning is shown to be the unique viscosity solution of the HJB equation. A necessary and sufficient condition for optimality is provided using the viscosity solution framework. By using the HJB equation, we develop a Q-learning method for continuous-time dynamical systems. A DQN-like algorithm is also proposed for high-dimensional state and control spaces. The performance of the proposed Q-learning algorithm is demonstrated using 1-, 10-, and 20-dimensional dynamical systems.
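
The abstract does not reproduce the equation itself, so a brief orientation may help. For a discounted infinite-horizon problem with dynamics $\dot{x}(t) = f(x(t), a(t))$, running reward $r(x,a)$, and discount rate $\gamma > 0$, the classical HJB equation for the optimal value function $V$ is the standard

\[
\gamma V(x) = \max_{a \in A} \big[\, r(x,a) + \nabla V(x) \cdot f(x,a) \,\big].
\]

The paper's object is instead a PDE satisfied by the Q-function $Q(x,a)$ when admissible controls are Lipschitz in time. As a hedged reconstruction (the exact statement is in the paper, not on this page): if the action's rate of change is bounded, $\|\dot{a}(t)\| \le L$, then treating $b = \dot{a}$ as the new decision variable suggests

\[
\gamma Q(x,a) = r(x,a) + \nabla_x Q(x,a) \cdot f(x,a) + \max_{\|b\| \le L} \nabla_a Q(x,a) \cdot b,
\]

where the last term evaluates to $L \, \|\nabla_a Q(x,a)\|$.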

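To make the mechanics concrete, here is a minimal, entirely illustrative sketch (not the authors' algorithm or code) of using such a PDE as a learning signal: fit a linear-in-features Q(x, a) on a 1-D toy system by stochastic gradient descent on the squared residual of the reconstructed equation above. The dynamics, reward, features, and hyperparameters below are all assumptions chosen for brevity.

import numpy as np

# Toy setup (assumed for illustration): xdot = a, reward r = -(x^2 + a^2),
# discount rate gamma, Lipschitz bound L on |da/dt|.
gamma, L = 1.0, 1.0
rng = np.random.default_rng(0)

def features(x, a):
    # Quadratic features in (x, a); Q(x, a) = w . phi(x, a).
    return np.array([x * x, a * a, x * a, x, a, 1.0])

def dphi_dx(x, a):
    return np.array([2 * x, 0.0, a, 1.0, 0.0, 0.0])

def dphi_da(x, a):
    return np.array([0.0, 2 * a, x, 0.0, 1.0, 0.0])

f = lambda x, a: a                    # dynamics
r = lambda x, a: -(x * x + a * a)     # running reward

w = np.zeros(6)
lr = 1e-3
for _ in range(20000):
    x, a = rng.uniform(-2.0, 2.0, size=2)
    phi, gx, ga = features(x, a), dphi_dx(x, a), dphi_da(x, a)
    Qx, Qa = w @ gx, w @ ga
    # Residual of: gamma*Q = r + f*Q_x + L*|Q_a|.
    resid = gamma * (w @ phi) - r(x, a) - f(x, a) * Qx - L * abs(Qa)
    # Gradient step on 0.5 * resid^2 (resid is piecewise linear in w).
    w -= lr * resid * (gamma * phi - f(x, a) * gx - L * np.sign(Qa) * ga)

print("learned weights:", np.round(w, 3))

The point is the mechanics rather than the toy itself: the PDE residual plays the role that the temporal-difference error plays in discrete-time Q-learning, which is also the bridge that makes a DQN-like variant natural.
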
Cite this Paper

BibTeX
@InProceedings{pmlr-v120-kim20b,
  title     = {Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time},
  author    = {Kim, Jeongho and Yang, Insoon},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  pages     = {739--748},
  year      = {2020},
  editor    = {Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie},
  volume    = {120},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v120/kim20b/kim20b.pdf},
  url       = {https://proceedings.mlr.press/v120/kim20b.html}
}
Endnote
%0 Conference Paper
%T Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time
%A Jeongho Kim
%A Insoon Yang
%B Proceedings of the 2nd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2020
%E Alexandre M. Bayen
%E Ali Jadbabaie
%E George Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire Tomlin
%E Melanie Zeilinger
%F pmlr-v120-kim20b
%I PMLR
%P 739--748
%U https://proceedings.mlr.press/v120/kim20b.html
%V 120
APA
Kim, J. & Yang, I. (2020). Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time. Proceedings of the 2nd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 120:739-748. Available from https://proceedings.mlr.press/v120/kim20b.html.
