Online Reinforcement Learning in Stochastic Continuous-Time Systems

Mohamad Kazem Shirani Faradonbeh, Mohamad Sadegh Shirani Faradonbeh
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:612-656, 2023.

Abstract

Linear dynamical systems that obey stochastic differential equations are canonical models. While optimal control of known systems has a rich literature, the problem is technically hard under model uncertainty, and there are hardly any such results. We initiate the study of this problem and aim to learn (and simultaneously deploy) optimal actions for minimizing a quadratic cost function. Indeed, this work is the first to comprehensively address the crucial challenge of balancing exploration versus exploitation in continuous-time systems. We present online policies that learn optimal actions quickly by carefully randomizing the parameter estimates, and we establish their performance guarantees: a regret bound that grows with the square root of time multiplied by the number of parameters. Implementation of the policy for a flight-control task demonstrates its efficacy. Further, we prove sharp stability results for inexact system dynamics and tightly specify the infinitesimal regret caused by sub-optimal actions. To obtain these results, we conduct a novel eigenvalue-sensitivity analysis for matrix perturbation, establish upper bounds for comparative ratios of stochastic integrals, and introduce the new method of policy differentiation. Our analysis sheds light on fundamental challenges in continuous-time reinforcement learning and suggests a useful cornerstone for similar problems.
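In standard asymptotic notation, the stated guarantee (regret growing with the square root of time, multiplied by the number of parameters) can be sketched as follows; here $d$ denoting the number of unknown parameters and $T$ the time horizon is our notational assumption, not taken from the abstract:

```latex
% Regret scaling described in the abstract:
% square root of the horizon T, multiplied by the parameter count d.
\mathrm{Regret}(T) \;=\; O\!\left( d \,\sqrt{T} \right)
```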

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-shirani-faradonbeh23a,
  title     = {Online Reinforcement Learning in Stochastic Continuous-Time Systems},
  author    = {Shirani Faradonbeh, Mohamad Kazem and Shirani Faradonbeh, Mohamad Sadegh},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {612--656},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/shirani-faradonbeh23a/shirani-faradonbeh23a.pdf},
  url       = {https://proceedings.mlr.press/v195/shirani-faradonbeh23a.html},
  abstract  = {Linear dynamical systems that obey stochastic differential equations are canonical models. While optimal control of known systems has a rich literature, the problem is technically hard under model uncertainty and there are hardly any such result. We initiate study of this problem and aim to learn (and simultaneously deploy) optimal actions for minimizing a quadratic cost function. Indeed, this work is the first that comprehensively addresses the crucial challenge of balancing exploration versus exploitation in continuous-time systems. We present online policies that learn optimal actions fast by carefully randomizing the parameter estimates, and establish their performance guarantees: a regret bound that grows with square-root of time multiplied by the number of parameters. Implementation of the policy for a flight-control task demonstrates its efficacy. Further, we prove sharp stability results for inexact system dynamics and tightly specify the infinitesimal regret caused by sub-optimal actions. To obtain the results, we conduct a novel eigenvalue-sensitivity analysis for matrix perturbation, establish upper-bounds for comparative ratios of stochastic integrals, and introduce the new method of policy differentiation. Our analysis sheds light on fundamental challenges in continuous-time reinforcement learning and suggests a useful cornerstone for similar problems.}
}
Endnote
%0 Conference Paper
%T Online Reinforcement Learning in Stochastic Continuous-Time Systems
%A Mohamad Kazem Shirani Faradonbeh
%A Mohamad Sadegh Shirani Faradonbeh
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-shirani-faradonbeh23a
%I PMLR
%P 612--656
%U https://proceedings.mlr.press/v195/shirani-faradonbeh23a.html
%V 195
%X Linear dynamical systems that obey stochastic differential equations are canonical models. While optimal control of known systems has a rich literature, the problem is technically hard under model uncertainty and there are hardly any such result. We initiate study of this problem and aim to learn (and simultaneously deploy) optimal actions for minimizing a quadratic cost function. Indeed, this work is the first that comprehensively addresses the crucial challenge of balancing exploration versus exploitation in continuous-time systems. We present online policies that learn optimal actions fast by carefully randomizing the parameter estimates, and establish their performance guarantees: a regret bound that grows with square-root of time multiplied by the number of parameters. Implementation of the policy for a flight-control task demonstrates its efficacy. Further, we prove sharp stability results for inexact system dynamics and tightly specify the infinitesimal regret caused by sub-optimal actions. To obtain the results, we conduct a novel eigenvalue-sensitivity analysis for matrix perturbation, establish upper-bounds for comparative ratios of stochastic integrals, and introduce the new method of policy differentiation. Our analysis sheds light on fundamental challenges in continuous-time reinforcement learning and suggests a useful cornerstone for similar problems.
APA
Shirani Faradonbeh, M. K. & Shirani Faradonbeh, M. S. (2023). Online Reinforcement Learning in Stochastic Continuous-Time Systems. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:612-656. Available from https://proceedings.mlr.press/v195/shirani-faradonbeh23a.html.
