Toward Near-Globally Optimal Nonlinear Model Predictive Control via Diffusion Models

Tzu-Yuan Huang, Armin Lederer, Nicolas Hoischen, Jan Brudigam, Xuehua Xiao, Stefan Sosnowski, Sandra Hirche
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:777-790, 2025.

Abstract

Achieving global optimality in nonlinear model predictive control (NMPC) is challenging due to the non-convex nature of the underlying optimization problem. Since commonly employed local optimization techniques depend on carefully chosen initial guesses, this non-convexity often leads to suboptimal performance resulting from local optima. To overcome this limitation, we propose a novel diffusion model-based approach for near-globally optimal NMPC consisting of an offline and an online phase. The offline phase employs a local optimizer to sample from the distribution of optimal NMPC control sequences along generated system trajectories through random initial guesses. Subsequently, the generated diverse dataset is used to train a diffusion model to reflect the multi-modal distribution of optima. In the online phase, the trained model is leveraged to efficiently perform a variant of random shooting optimization to obtain near-globally optimal control sequences without relying on any initial guesses or online NMPC solving. The effectiveness of our approach is illustrated in a numerical simulation indicating high performance benefits compared to direct neural network approximations of NMPC and significantly lower computation times than online solving NMPC using global optimizers.
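The online phase described above amounts to a sampling-based variant of random shooting: draw candidate control sequences from the trained diffusion model, roll each one out through the system dynamics, and keep the lowest-cost sequence. The sketch below illustrates that selection step only; the Gaussian `sample_controls`, the integrator dynamics `f`, and the quadratic `stage_cost` are toy stand-ins (the paper conditions a trained diffusion model on the current state rather than using a fixed Gaussian sampler).

```python
import numpy as np

def rollout_cost(x0, u_seq, f, stage_cost):
    """Simulate dynamics f under the control sequence u_seq and accumulate cost."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += stage_cost(x, u)
        x = f(x, u)
    return cost

def sampling_shooting(x0, sample_controls, f, stage_cost, n_candidates=64):
    """Random-shooting variant: draw candidate control sequences from a
    generative sampler and return the lowest-cost one."""
    candidates = [sample_controls(x0) for _ in range(n_candidates)]
    costs = [rollout_cost(x0, u, f, stage_cost) for u in candidates]
    return candidates[int(np.argmin(costs))]

# Toy stand-ins for illustration only (hypothetical, not from the paper).
rng = np.random.default_rng(0)
horizon = 5
sample_controls = lambda x0: rng.normal(size=(horizon, 1))  # placeholder sampler
f = lambda x, u: x + 0.1 * u                                # simple integrator
stage_cost = lambda x, u: float(x @ x + 0.01 * u @ u)       # quadratic cost

best_u = sampling_shooting(np.array([1.0]), sample_controls, f, stage_cost)
```

Because every candidate is evaluated by a cheap forward rollout rather than an optimization solve, this step avoids both initial guesses and online NMPC solving, matching the abstract's description of the online phase.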

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-huang25a,
  title = {Toward Near-Globally Optimal Nonlinear Model Predictive Control via Diffusion Models},
  author = {Huang, Tzu-Yuan and Lederer, Armin and Hoischen, Nicolas and Brudigam, Jan and Xiao, Xuehua and Sosnowski, Stefan and Hirche, Sandra},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages = {777--790},
  year = {2025},
  editor = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume = {283},
  series = {Proceedings of Machine Learning Research},
  month = {04--06 Jun},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/huang25a/huang25a.pdf},
  url = {https://proceedings.mlr.press/v283/huang25a.html},
  abstract = {Achieving global optimality in nonlinear model predictive control (NMPC) is challenging due to the non-convex nature of the underlying optimization problem. Since commonly employed local optimization techniques depend on carefully chosen initial guesses, this non-convexity often leads to suboptimal performance resulting from local optima. To overcome this limitation, we propose a novel diffusion model-based approach for near-globally optimal NMPC consisting of an offline and an online phase. The offline phase employs a local optimizer to sample from the distribution of optimal NMPC control sequences along generated system trajectories through random initial guesses. Subsequently, the generated diverse dataset is used to train a diffusion model to reflect the multi-modal distribution of optima. In the online phase, the trained model is leveraged to efficiently perform a variant of random shooting optimization to obtain near-globally optimal control sequences without relying on any initial guesses or online NMPC solving. The effectiveness of our approach is illustrated in a numerical simulation indicating high performance benefits compared to direct neural network approximations of NMPC and significantly lower computation times than online solving NMPC using global optimizers.}
}
Endnote
%0 Conference Paper
%T Toward Near-Globally Optimal Nonlinear Model Predictive Control via Diffusion Models
%A Tzu-Yuan Huang
%A Armin Lederer
%A Nicolas Hoischen
%A Jan Brudigam
%A Xuehua Xiao
%A Stefan Sosnowski
%A Sandra Hirche
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-huang25a
%I PMLR
%P 777--790
%U https://proceedings.mlr.press/v283/huang25a.html
%V 283
%X Achieving global optimality in nonlinear model predictive control (NMPC) is challenging due to the non-convex nature of the underlying optimization problem. Since commonly employed local optimization techniques depend on carefully chosen initial guesses, this non-convexity often leads to suboptimal performance resulting from local optima. To overcome this limitation, we propose a novel diffusion model-based approach for near-globally optimal NMPC consisting of an offline and an online phase. The offline phase employs a local optimizer to sample from the distribution of optimal NMPC control sequences along generated system trajectories through random initial guesses. Subsequently, the generated diverse dataset is used to train a diffusion model to reflect the multi-modal distribution of optima. In the online phase, the trained model is leveraged to efficiently perform a variant of random shooting optimization to obtain near-globally optimal control sequences without relying on any initial guesses or online NMPC solving. The effectiveness of our approach is illustrated in a numerical simulation indicating high performance benefits compared to direct neural network approximations of NMPC and significantly lower computation times than online solving NMPC using global optimizers.
APA
Huang, T., Lederer, A., Hoischen, N., Brudigam, J., Xiao, X., Sosnowski, S. & Hirche, S. (2025). Toward Near-Globally Optimal Nonlinear Model Predictive Control via Diffusion Models. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:777-790. Available from https://proceedings.mlr.press/v283/huang25a.html.