Uncertainty quantification and robustification of model-based controllers using conformal prediction

Kong Yao Chee, Thales C. Silva, M. Ani Hsieh, George J. Pappas
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:528-540, 2024.

Abstract

In modern model-based control frameworks such as model predictive control or model-based reinforcement learning, machine learning has become a ubiquitous class of techniques deployed to improve the accuracy of the dynamics models. By leveraging expressive architectures such as neural networks, these frameworks aim to improve both the model accuracy and the control performance of the system through the construction of accurate, data-driven representations of the system dynamics. Despite achieving significant performance improvements over their non-learning counterparts, there are often few or no guarantees on how these model-based controllers with learned models would perform in the presence of uncertainty. In particular, under the influence of modeling errors, noise and exogenous disturbances, it is challenging to ascertain the accuracy of these learned models. In some cases, constraints may even be violated, rendering the controllers unsafe. In this work, we propose a novel framework that can be applied to a large class of model-based controllers and alleviates the aforementioned issues by robustifying these controllers in an online and modular manner, with provable guarantees on model accuracy and constraint satisfaction. The framework first deploys conformal prediction to generate finite-sample, provably valid uncertainty regions for the dynamics model in a distribution-free manner. These uncertainty regions are incorporated into the constraints through a dynamic constraint-tightening procedure. Together with the formulation of a predictive reference generator, a set of robustified reference trajectories is generated and incorporated into the model-based controller. Using two practical case studies, we demonstrate that our proposed methodology not only produces well-calibrated uncertainty regions that establish the accuracy of the models, but also enables the closed-loop system to satisfy constraints in a robust yet non-conservative manner.
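To make the conformal prediction step concrete, the sketch below shows how finite-sample, distribution-free uncertainty regions can be computed for a learned dynamics model via split conformal prediction, the standard construction behind the guarantee the abstract describes. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the norm-ball shape of the uncertainty region, and the model(x, u) prediction interface are illustrative choices.

import numpy as np

def conformal_radius(model, states, inputs, next_states, alpha=0.1):
    # Nonconformity scores: one-step prediction errors of the learned
    # dynamics model on a held-out calibration set.
    preds = np.array([model(x, u) for x, u in zip(states, inputs)])
    scores = np.linalg.norm(np.asarray(next_states) - preds, axis=1)

    # Finite-sample conformal quantile: the k-th smallest score with
    # k = ceil((n + 1) * (1 - alpha)). For a fresh, exchangeable sample,
    # the true next state then lies within distance q of the model's
    # prediction with probability at least 1 - alpha, regardless of the
    # data distribution.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    if k > n:
        return np.inf  # calibration set too small for this coverage level
    return np.sort(scores)[k - 1]

A radius q obtained this way can then play the role of the uncertainty regions in the paper: state constraints are tightened by q at each prediction step before the reference generator and the model-based controller are invoked, which is what yields robust yet non-conservative constraint satisfaction.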

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-chee24a,
  title     = {Uncertainty quantification and robustification of model-based controllers using conformal prediction},
  author    = {Chee, Kong Yao and Silva, Thales C. and Hsieh, M. Ani and Pappas, George J.},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {528--540},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/chee24a/chee24a.pdf},
  url       = {https://proceedings.mlr.press/v242/chee24a.html}
}
EndNote
%0 Conference Paper
%T Uncertainty quantification and robustification of model-based controllers using conformal prediction
%A Kong Yao Chee
%A Thales C. Silva
%A M. Ani Hsieh
%A George J. Pappas
%B Proceedings of the 6th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Alessandro Abate
%E Mark Cannon
%E Kostas Margellos
%E Antonis Papachristodoulou
%F pmlr-v242-chee24a
%I PMLR
%P 528--540
%U https://proceedings.mlr.press/v242/chee24a.html
%V 242
APA
Chee, K.Y., Silva, T.C., Hsieh, M.A., & Pappas, G.J. (2024). Uncertainty quantification and robustification of model-based controllers using conformal prediction. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:528-540. Available from https://proceedings.mlr.press/v242/chee24a.html.