Analytical Integral Global Optimization

Sebastien Labbe, Andrea Del Prete
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:711-722, 2025.

Abstract

Numerical optimization has been the workhorse powering the success of many machine learning and artificial intelligence tools over the last decade. However, current state-of-the-art algorithms for solving unconstrained non-convex optimization problems in high-dimensional spaces either suffer from the curse of dimensionality, as they rely on sampling, or get stuck in local minima, as they rely on gradient-based optimization. We present a new graduated optimization method based on optimizing the integral of the cost function over a region that is incrementally shrunk towards a single point, recovering the original problem. We focus on the optimization of polynomial functions, for which the integral over simple regions (e.g., hyperboxes) can be computed efficiently. We show that this algorithm is guaranteed to converge to the global optimum in the simple case of a scalar decision variable. While this theoretical result does not extend to the multi-dimensional case, we empirically show that our approach outperforms several state-of-the-art algorithms when tested on sparse polynomial functions in dimensions up to 170 decision variables.
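To make the core idea concrete, here is a minimal one-dimensional sketch, not the authors' implementation: the test polynomial, the step size, the shrink factor, and names like smoothed_grad are assumptions made for illustration. Because the cost is polynomial, its average over an interval [x - delta, x + delta] has a closed form via the antiderivative, so the smoothed objective and its gradient can be evaluated exactly while delta is shrunk towards zero:

from numpy.polynomial import Polynomial

# Illustrative sketch, not the paper's algorithm: graduated optimization
# of a 1-D polynomial using the exact average of f over a shrinking
# interval. f(x) = x^4 - 8x^2 + 3x has a local minimum near x = 1.90
# and its global minimum near x = -2.09.
f = Polynomial([0.0, 3.0, -8.0, 0.0, 1.0])
F = f.integ()  # antiderivative, so interval averages are closed-form

def smoothed_grad(x, delta):
    # d/dx of (1 / (2 delta)) * integral of f over [x - delta, x + delta];
    # by the Leibniz rule this is exact, no numerical quadrature needed
    return (f(x + delta) - f(x - delta)) / (2.0 * delta)

x, delta = 1.5, 4.0          # start inside the local minimum's basin
while delta > 1e-6:          # shrink the averaging region towards a point
    for _ in range(200):     # gradient descent on the smoothed objective
        x -= 0.01 * smoothed_grad(x, delta)
    delta *= 0.5             # halve the interval, warm-starting from x

print(f"x = {x:.3f}, f(x) = {f(x):.3f}")  # lands near the global minimum

For comparison, plain gradient descent on f from the same start (x = 1.5) converges to the local minimum near 1.90; the wide averaging region early on is what flattens the barrier and lets the iterate cross into the global basin before the region collapses back to the original problem.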

Cite this Paper

BibTeX
@InProceedings{pmlr-v283-labbe25a,
  title     = {Analytical Integral Global Optimization},
  author    = {Labbe, Sebastien and Prete, Andrea Del},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages     = {711--722},
  year      = {2025},
  editor    = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume    = {283},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--06 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/labbe25a/labbe25a.pdf},
  url       = {https://proceedings.mlr.press/v283/labbe25a.html},
  abstract  = {Numerical optimization has been the workhorse powering the success of many machine learning and artificial intelligence tools over the last decade. However, current state-of-the-art algorithms for solving unconstrained non-convex optimization problems in high-dimensional spaces either suffer from the curse of dimensionality, as they rely on sampling, or get stuck in local minima, as they rely on gradient-based optimization. We present a new graduated optimization method based on optimizing the integral of the cost function over a region that is incrementally shrunk towards a single point, recovering the original problem. We focus on the optimization of polynomial functions, for which the integral over simple regions (e.g., hyperboxes) can be computed efficiently. We show that this algorithm is guaranteed to converge to the global optimum in the simple case of a scalar decision variable. While this theoretical result does not extend to the multi-dimensional case, we empirically show that our approach outperforms several state-of-the-art algorithms when tested on sparse polynomial functions in dimensions up to 170 decision variables.}
}
Endnote
%0 Conference Paper
%T Analytical Integral Global Optimization
%A Sebastien Labbe
%A Andrea Del Prete
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-labbe25a
%I PMLR
%P 711-722
%U https://proceedings.mlr.press/v283/labbe25a.html
%V 283
%X Numerical optimization has been the workhorse powering the success of many machine learning and artificial intelligence tools over the last decade. However, current state-of-the-art algorithms for solving unconstrained non-convex optimization problems in high-dimensional spaces either suffer from the curse of dimensionality, as they rely on sampling, or get stuck in local minima, as they rely on gradient-based optimization. We present a new graduated optimization method based on optimizing the integral of the cost function over a region that is incrementally shrunk towards a single point, recovering the original problem. We focus on the optimization of polynomial functions, for which the integral over simple regions (e.g., hyperboxes) can be computed efficiently. We show that this algorithm is guaranteed to converge to the global optimum in the simple case of a scalar decision variable. While this theoretical result does not extend to the multi-dimensional case, we empirically show that our approach outperforms several state-of-the-art algorithms when tested on sparse polynomial functions in dimensions up to 170 decision variables.
APA
Labbe, S. & Prete, A.D. (2025). Analytical Integral Global Optimization. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:711-722. Available from https://proceedings.mlr.press/v283/labbe25a.html.
