Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control

Katie Kang, Paula Gradu, Jason J Choi, Michael Janner, Claire Tomlin, Sergey Levine
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10708-10733, 2022.

Abstract

Learned models and policies can generalize effectively when evaluated within the distribution of the training data, but can produce unpredictable and erroneous outputs on out-of-distribution inputs. In order to avoid distribution shift when deploying learning-based control algorithms, we seek a mechanism to constrain the agent to states and actions that resemble those that the method was trained on. In control theory, Lyapunov stability and control-invariant sets allow us to make guarantees about controllers that stabilize the system around specific states, while in machine learning, density models allow us to estimate the training data distribution. Can we combine these two concepts, producing learning-based control algorithms that constrain the system to in-distribution states using only in-distribution actions? In this paper, we propose to do this by combining concepts from Lyapunov stability and density estimation, introducing Lyapunov density models: a generalization of control Lyapunov functions and density models that provides guarantees about an agent’s ability to stay in-distribution over its entire trajectory.
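To make the abstract's central construction concrete, below is a minimal sketch in Python/NumPy of the flavor of guarantee being described: treat the negative log of a density model as an "energy", back it up through known dynamics, and then only take actions whose backed-up value stays below a log-density threshold. This is an illustration under simplifying assumptions (a toy discrete system, deterministic known dynamics, a stand-in density model), not the paper's implementation; the names f, p, G, and the threshold c are all hypothetical and invented for the example.

import numpy as np

n_states, n_actions = 6, 2
rng = np.random.default_rng(0)

# Hypothetical deterministic dynamics: f[s, a] gives the next state.
f = rng.integers(0, n_states, size=(n_states, n_actions))

# Stand-in for a learned density model over training (state, action) pairs.
p = rng.dirichlet(np.ones(n_states * n_actions)).reshape(n_states, n_actions)
E = -np.log(p)  # "energy": low where the training data is dense

# Iterate the backup G(s, a) = max(E(s, a), min_{a'} G(f[s, a], a')) to a
# fixed point. G(s, a) then bounds the worst (least dense) point that must
# be visited along the best-case trajectory starting with (s, a).
G = E.copy()
for _ in range(1000):
    G_new = np.maximum(E, G[f].min(axis=-1))  # G[f][s, a, a'] = G(f[s, a], a')
    if np.allclose(G_new, G):
        break
    G = G_new

def in_distribution_actions(s, c=1e-3):
    # Actions that (in this toy model) keep the entire future trajectory
    # inside the region where the modeled density is at least c.
    return np.flatnonzero(G[s] <= -np.log(c))

print(in_distribution_actions(0))

Filtering actions by a threshold on the fixed point G, rather than on the one-step density alone, is what turns a pointwise density estimate into a guarantee over the entire trajectory: a single step into a dense but dead-end region is ruled out because its backed-up value already reflects the sparse states it must eventually reach.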

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-kang22a,
  title     = {{L}yapunov Density Models: Constraining Distribution Shift in Learning-Based Control},
  author    = {Kang, Katie and Gradu, Paula and Choi, Jason J and Janner, Michael and Tomlin, Claire and Levine, Sergey},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {10708--10733},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/kang22a/kang22a.pdf},
  url       = {https://proceedings.mlr.press/v162/kang22a.html}
}
APA
Kang, K., Gradu, P., Choi, J.J., Janner, M., Tomlin, C. & Levine, S. (2022). Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:10708-10733. Available from https://proceedings.mlr.press/v162/kang22a.html.
