Approximate Distributionally Robust Nonlinear Optimization with Application to Model Predictive Control: A Functional Approach

Yassine Nemmour, Bernhard Schölkopf, Jia-Jie Zhu
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:1255-1269, 2021.

Abstract

We provide a functional view of distributional robustness motivated by robust statistics and functional analysis. This results in two practical computational approaches for approximate distributionally robust nonlinear optimization based on gradient norms and reproducing kernel Hilbert spaces. Our method can be applied to the settings of statistical learning with small sample size and test distribution shift. As a case study, we robustify scenario-based stochastic model predictive control with general nonlinear constraints. In particular, we demonstrate constraint satisfaction with only a small number of scenarios under distribution shift.

Cite this Paper


BibTeX
@InProceedings{pmlr-v144-nemmour21a,
  title     = {Approximate Distributionally Robust Nonlinear Optimization with Application to Model Predictive Control: A Functional Approach},
  author    = {Nemmour, Yassine and Sch\"olkopf, Bernhard and Zhu, Jia-Jie},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages     = {1255--1269},
  year      = {2021},
  editor    = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume    = {144},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 June},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v144/nemmour21a/nemmour21a.pdf},
  url       = {https://proceedings.mlr.press/v144/nemmour21a.html},
  abstract  = {We provide a functional view of distributional robustness motivated by robust statistics and functional analysis. This results in two practical computational approaches for approximate distributionally robust nonlinear optimization based on gradient norms and reproducing kernel Hilbert spaces. Our method can be applied to the settings of statistical learning with small sample size and test distribution shift. As a case study, we robustify scenario-based stochastic model predictive control with general nonlinear constraints. In particular, we demonstrate constraint satisfaction with only a small number of scenarios under distribution shift.}
}
Endnote
%0 Conference Paper
%T Approximate Distributionally Robust Nonlinear Optimization with Application to Model Predictive Control: A Functional Approach
%A Yassine Nemmour
%A Bernhard Schölkopf
%A Jia-Jie Zhu
%B Proceedings of the 3rd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2021
%E Ali Jadbabaie
%E John Lygeros
%E George J. Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire J. Tomlin
%E Melanie N. Zeilinger
%F pmlr-v144-nemmour21a
%I PMLR
%P 1255--1269
%U https://proceedings.mlr.press/v144/nemmour21a.html
%V 144
%X We provide a functional view of distributional robustness motivated by robust statistics and functional analysis. This results in two practical computational approaches for approximate distributionally robust nonlinear optimization based on gradient norms and reproducing kernel Hilbert spaces. Our method can be applied to the settings of statistical learning with small sample size and test distribution shift. As a case study, we robustify scenario-based stochastic model predictive control with general nonlinear constraints. In particular, we demonstrate constraint satisfaction with only a small number of scenarios under distribution shift.
APA
Nemmour, Y., Schölkopf, B. & Zhu, J.-J. (2021). Approximate Distributionally Robust Nonlinear Optimization with Application to Model Predictive Control: A Functional Approach. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:1255-1269. Available from https://proceedings.mlr.press/v144/nemmour21a.html.