Optimal Control with Learning on the Fly: System with Unknown Drift

Daniel Gurevich, Debdipta Goswami, Charles L. Fefferman, Clarence W. Rowley
Proceedings of The 4th Annual Learning for Dynamics and Control Conference, PMLR 168:870-880, 2022.

Abstract

This paper derives an optimal control strategy for a simple stochastic dynamical system with constant drift and an additive control input. Motivated by the example of a physical system with an unexpected change in its dynamics, we take the drift parameter to be unknown, so that it must be learned while controlling the system. The state of the system is observed through a linear observation model with Gaussian noise. In contrast to most previous work, which focuses on a controller’s asymptotic performance over an infinite time horizon, we minimize a quadratic cost function over a finite time horizon. The performance of our control strategy is quantified by comparing its cost with the cost incurred by an optimal controller that has full knowledge of the parameters. This approach gives rise to several notions of “regret.” We derive a set of control strategies that provably minimize the worst-case regret, which arise from Bayesian strategies that assume a specific fixed prior on the drift parameter. This work suggests that examining Bayesian strategies may lead to optimal or near-optimal control strategies for a much larger class of realistic dynamical models with unknown parameters.
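The setting described above can be illustrated with a minimal simulation sketch. This is not the paper's construction: it assumes a simplified scalar discrete-time analogue with noiseless state observation, a conjugate Gaussian posterior on the unknown drift, and a naive certainty-equivalence controller, and it measures an empirical regret against an oracle that knows the drift.

```python
import numpy as np

def simulate(a_true, bayesian, T=200, sigma_w=0.1, mu0=0.0, var0=1.0, seed=0):
    """Run x_{t+1} = x_t + a + u_t + w_t for T steps and return the
    accumulated quadratic cost sum_t (x_t^2 + u_t^2).

    If `bayesian` is True, the controller uses the posterior mean of the
    drift a; otherwise it is an oracle that knows a exactly. The same
    seed is used so both controllers face identical noise sequences.
    """
    rng = np.random.default_rng(seed)
    x = 0.0
    mu, var = mu0, var0          # Gaussian posterior N(mu, var) on the drift a
    cost = 0.0
    for _ in range(T):
        a_hat = mu if bayesian else a_true
        u = -x - a_hat           # certainty equivalence: cancel state and estimated drift
        w = sigma_w * rng.standard_normal()
        x_next = x + a_true + u + w
        cost += x**2 + u**2
        # The increment z = x_next - x - u = a + w is a noisy observation of a,
        # so the posterior updates by the standard conjugate Gaussian formula.
        z = x_next - x - u
        var_new = 1.0 / (1.0 / var + 1.0 / sigma_w**2)
        mu = var_new * (mu / var + z / sigma_w**2)
        var = var_new
        x = x_next
    return cost

a = 0.8                                        # hypothetical true drift
learner_cost = simulate(a, bayesian=True)
oracle_cost = simulate(a, bayesian=False)
regret = learner_cost - oracle_cost            # cost of having to learn a
```

The gap `regret` is incurred almost entirely in the first few steps, before the posterior on the drift concentrates; this mirrors the paper's finite-horizon framing, where the price of learning cannot be amortized away asymptotically.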

Cite this Paper


BibTeX
@InProceedings{pmlr-v168-gurevich22a,
  title = {Optimal Control with Learning on the Fly: System with Unknown Drift},
  author = {Gurevich, Daniel and Goswami, Debdipta and Fefferman, Charles L. and Rowley, Clarence W.},
  booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
  pages = {870--880},
  year = {2022},
  editor = {Firoozi, Roya and Mehr, Negar and Yel, Esen and Antonova, Rika and Bohg, Jeannette and Schwager, Mac and Kochenderfer, Mykel},
  volume = {168},
  series = {Proceedings of Machine Learning Research},
  month = {23--24 Jun},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v168/gurevich22a/gurevich22a.pdf},
  url = {https://proceedings.mlr.press/v168/gurevich22a.html},
  abstract = {This paper derives an optimal control strategy for a simple stochastic dynamical system with constant drift and an additive control input. Motivated by the example of a physical system with an unexpected change in its dynamics, we take the drift parameter to be unknown, so that it must be learned while controlling the system. The state of the system is observed through a linear observation model with Gaussian noise. In contrast to most previous work, which focuses on a controller's asymptotic performance over an infinite time horizon, we minimize a quadratic cost function over a finite time horizon. The performance of our control strategy is quantified by comparing its cost with the cost incurred by an optimal controller that has full knowledge of the parameters. This approach gives rise to several notions of ``regret.'' We derive a set of control strategies that provably minimize the worst-case regret, which arise from Bayesian strategies that assume a specific fixed prior on the drift parameter. This work suggests that examining Bayesian strategies may lead to optimal or near-optimal control strategies for a much larger class of realistic dynamical models with unknown parameters.}
}
Endnote
%0 Conference Paper
%T Optimal Control with Learning on the Fly: System with Unknown Drift
%A Daniel Gurevich
%A Debdipta Goswami
%A Charles L. Fefferman
%A Clarence W. Rowley
%B Proceedings of The 4th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Roya Firoozi
%E Negar Mehr
%E Esen Yel
%E Rika Antonova
%E Jeannette Bohg
%E Mac Schwager
%E Mykel Kochenderfer
%F pmlr-v168-gurevich22a
%I PMLR
%P 870--880
%U https://proceedings.mlr.press/v168/gurevich22a.html
%V 168
%X This paper derives an optimal control strategy for a simple stochastic dynamical system with constant drift and an additive control input. Motivated by the example of a physical system with an unexpected change in its dynamics, we take the drift parameter to be unknown, so that it must be learned while controlling the system. The state of the system is observed through a linear observation model with Gaussian noise. In contrast to most previous work, which focuses on a controller's asymptotic performance over an infinite time horizon, we minimize a quadratic cost function over a finite time horizon. The performance of our control strategy is quantified by comparing its cost with the cost incurred by an optimal controller that has full knowledge of the parameters. This approach gives rise to several notions of "regret." We derive a set of control strategies that provably minimize the worst-case regret, which arise from Bayesian strategies that assume a specific fixed prior on the drift parameter. This work suggests that examining Bayesian strategies may lead to optimal or near-optimal control strategies for a much larger class of realistic dynamical models with unknown parameters.
APA
Gurevich, D., Goswami, D., Fefferman, C.L. & Rowley, C.W. (2022). Optimal Control with Learning on the Fly: System with Unknown Drift. Proceedings of The 4th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 168:870-880. Available from https://proceedings.mlr.press/v168/gurevich22a.html.