Policy Learning of MDPs with Mixed Continuous/Discrete Variables: A Case Study on Model-Free Control of Markovian Jump Systems

Joao Paulo Jansch-Porto, Bin Hu, Geir Dullerud
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:947-957, 2020.

Abstract

Markovian jump linear systems (MJLS) are an important class of dynamical systems that arise in many control applications. In this paper, we introduce the problem of controlling unknown MJLS as a new reinforcement learning benchmark for Markov decision processes with mixed continuous/discrete state variables. Compared with the traditional linear quadratic regulator (LQR) problem, our proposed benchmark leads to a special hybrid MDP (with mixed continuous and discrete variables) and poses significant new challenges due to the underlying Markov jump parameter governing the mode of the system dynamics. Specifically, the state of an MJLS does not form a Markov chain on its own, and hence one cannot study the MJLS control problem as an MDP with a solely continuous state variable. However, one can augment the state with the jump parameter to obtain an MDP with a mixed continuous-discrete state space. We discuss how control theory sheds light on the policy parameterization of such hybrid MDPs. Using recently developed policy gradient results for MJLS, we show that data-driven methods can solve the discounted-cost version of the LQR problem for MJLS. We modify the widely used natural policy gradient method to directly learn the optimal state-feedback control policy for MJLS without identifying either the system dynamics or the transition probabilities of the switching parameter. We implement the (data-driven) natural policy gradient method on several MJLS examples. Our simulation results suggest that the natural gradient method can efficiently learn the optimal controller for MJLS with unknown dynamics.
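
To make the policy parameterization and the model-free update concrete, the sketch below illustrates the kind of mode-dependent state-feedback policy and zeroth-order natural policy gradient step the abstract describes, on a toy two-mode MJLS. The system matrices, cost weights, smoothing radius, step size, and sample counts are all illustrative assumptions for this sketch, not the paper's experimental setup, and would need tuning in practice.

```python
# Minimal illustrative sketch (not the paper's implementation): model-free
# natural policy gradient for a toy two-mode MJLS with a mode-dependent
# linear state-feedback policy u_t = -K[w_t] x_t. All matrices and
# hyperparameters below are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy MJLS: x_{t+1} = A[w] x + B[w] u, with jump parameter w following a
# Markov chain with transition matrix P (assumed unknown to the learner).
A = [np.array([[1.0, 0.5], [0.0, 1.0]]),
     np.array([[0.8, 0.2], [0.1, 0.9]])]
B = [np.array([[0.0], [1.0]]),
     np.array([[0.5], [1.0]])]
P = np.array([[0.9, 0.1], [0.3, 0.7]])
Q, R = np.eye(2), np.eye(1)          # quadratic stage-cost weights
gamma, T = 0.95, 50                  # discount factor, rollout horizon
n_modes, n, m = 2, 2, 1

def rollout(K):
    """Simulate one trajectory under u = -K[w] x; return the discounted cost
    and per-mode discounted state correlations (used as the preconditioner)."""
    x = rng.normal(size=n)
    w = int(rng.integers(n_modes))
    cost = 0.0
    Sigma = [np.zeros((n, n)) for _ in range(n_modes)]
    for t in range(T):
        u = -K[w] @ x
        cost += gamma**t * (x @ Q @ x + u @ R @ u)
        Sigma[w] += gamma**t * np.outer(x, x)
        x = A[w] @ x + B[w] @ u
        w = int(rng.choice(n_modes, p=P[w]))
    return cost, Sigma

def natural_pg_step(K, r=0.05, n_samples=200, eta=1e-3):
    """One zeroth-order natural policy gradient step: perturb all mode gains
    jointly on a sphere of radius r, use antithetic rollouts to estimate the
    gradient, then right-precondition each mode's gradient by the (pseudo-)
    inverse of that mode's estimated state correlation."""
    d = n_modes * m * n
    grad = [np.zeros((m, n)) for _ in range(n_modes)]
    Sigma = [np.zeros((n, n)) for _ in range(n_modes)]
    for _ in range(n_samples):
        U = [rng.normal(size=(m, n)) for _ in range(n_modes)]
        scale = r / np.sqrt(sum(np.sum(Ui**2) for Ui in U))
        U = [scale * Ui for Ui in U]
        c_plus, Sig_p = rollout([K[i] + U[i] for i in range(n_modes)])
        c_minus, Sig_m = rollout([K[i] - U[i] for i in range(n_modes)])
        for i in range(n_modes):
            grad[i] += (d / (2 * r**2 * n_samples)) * (c_plus - c_minus) * U[i]
            Sigma[i] += 0.5 * (Sig_p[i] + Sig_m[i]) / n_samples
    return [K[i] - eta * grad[i] @ np.linalg.pinv(Sigma[i]) for i in range(n_modes)]

K = [np.zeros((m, n)) for _ in range(n_modes)]   # initial mode-dependent gains
for it in range(20):
    K = natural_pg_step(K)
    print(f"iteration {it:2d}: estimated cost {rollout(K)[0]:8.2f}")
```

The key structural point is that the gain is indexed by the discrete jump parameter while remaining linear in the continuous state, and the natural gradient preconditions each mode's update by that mode's estimated state correlation, the hybrid analog of the preconditioning used in model-free LQR. Everything is estimated from rollouts, with no identification of the dynamics or of the transition matrix.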

Cite this Paper


BibTeX
@InProceedings{pmlr-v120-jansch-porto20a,
  title     = {Policy Learning of MDPs with Mixed Continuous/Discrete Variables: A Case Study on Model-Free Control of Markovian Jump Systems},
  author    = {Jansch-Porto, Joao Paulo and Hu, Bin and Dullerud, Geir},
  booktitle = {Proceedings of the 2nd Conference on Learning for Dynamics and Control},
  pages     = {947--957},
  year      = {2020},
  editor    = {Bayen, Alexandre M. and Jadbabaie, Ali and Pappas, George and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire and Zeilinger, Melanie},
  volume    = {120},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v120/jansch-porto20a/jansch-porto20a.pdf},
  url       = {https://proceedings.mlr.press/v120/jansch-porto20a.html},
  abstract  = {Markovian jump linear systems (MJLS) are an important class of dynamical systems that arise in many control applications. In this paper, we introduce the problem of controlling unknown MJLS as a new reinforcement learning benchmark for Markov decision processes with mixed continuous/discrete state variables. Compared with the traditional linear quadratic regulator (LQR), our proposed problem leads to a special hybrid MDP (with mixed continuous and discrete variables) and poses significant new challenges due to the appearance of an underlying Markov jump parameter governing the mode of the system dynamics. Specifically, the state of a MJLS does not form a Markov chain and hence one cannot study the MJLS control problem as a MDP with solely continuous state variable. However, one can augment the state and the jump parameter to obtain a MDP with a mixed continuous-discrete state space. We discuss how control theory sheds light on the policy parameterization of such hybrid MDPs. Using a recently developed policy gradient results for MJLS, we show that we can use data-driven methods to solve the discounted cost version of the LQR problem. We modify the widely used natural policy gradient method to directly learn the optimal state feedback control policy for MJLS without identifying either the system dynamics or the transition probability of the switching parameter. We implement the (data-driven) natural policy gradient method on different MJLS examples. Our simulation results suggest that the natural gradient method can efficiently learn the optimal controller for MJLS with unknown dynamics.}
}
Endnote
%0 Conference Paper
%T Policy Learning of MDPs with Mixed Continuous/Discrete Variables: A Case Study on Model-Free Control of Markovian Jump Systems
%A Joao Paulo Jansch-Porto
%A Bin Hu
%A Geir Dullerud
%B Proceedings of the 2nd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2020
%E Alexandre M. Bayen
%E Ali Jadbabaie
%E George Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire Tomlin
%E Melanie Zeilinger
%F pmlr-v120-jansch-porto20a
%I PMLR
%P 947--957
%U https://proceedings.mlr.press/v120/jansch-porto20a.html
%V 120
%X Markovian jump linear systems (MJLS) are an important class of dynamical systems that arise in many control applications. In this paper, we introduce the problem of controlling unknown MJLS as a new reinforcement learning benchmark for Markov decision processes with mixed continuous/discrete state variables. Compared with the traditional linear quadratic regulator (LQR), our proposed problem leads to a special hybrid MDP (with mixed continuous and discrete variables) and poses significant new challenges due to the appearance of an underlying Markov jump parameter governing the mode of the system dynamics. Specifically, the state of a MJLS does not form a Markov chain and hence one cannot study the MJLS control problem as a MDP with solely continuous state variable. However, one can augment the state and the jump parameter to obtain a MDP with a mixed continuous-discrete state space. We discuss how control theory sheds light on the policy parameterization of such hybrid MDPs. Using a recently developed policy gradient results for MJLS, we show that we can use data-driven methods to solve the discounted cost version of the LQR problem. We modify the widely used natural policy gradient method to directly learn the optimal state feedback control policy for MJLS without identifying either the system dynamics or the transition probability of the switching parameter. We implement the (data-driven) natural policy gradient method on different MJLS examples. Our simulation results suggest that the natural gradient method can efficiently learn the optimal controller for MJLS with unknown dynamics.
APA
Jansch-Porto, J.P., Hu, B. & Dullerud, G. (2020). Policy Learning of MDPs with Mixed Continuous/Discrete Variables: A Case Study on Model-Free Control of Markovian Jump Systems. Proceedings of the 2nd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 120:947-957. Available from https://proceedings.mlr.press/v120/jansch-porto20a.html.
