SurCo: Learning Linear SURrogates for COmbinatorial Nonlinear Optimization Problems
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:10034-10052, 2023.
Abstract
Optimization problems with nonlinear cost functions and combinatorial constraints appear in many real-world applications but remain challenging to solve efficiently compared to their linear counterparts. To bridge this gap, we propose SurCo, which learns linear SURrogate costs that can be used by existing COmbinatorial solvers to output good solutions to the original nonlinear combinatorial optimization problem. The surrogate costs are learned end-to-end on the nonlinear loss by differentiating through the linear surrogate solver, combining the flexibility of gradient-based methods with the structure of linear combinatorial optimization. We propose three SurCo variants: SurCo-zero for individual nonlinear problems, SurCo-prior for problem distributions, and SurCo-hybrid, which combines both distributional and problem-specific information. We give theoretical intuition motivating SurCo and evaluate it empirically. Experiments show that SurCo finds better solutions faster than state-of-the-art and domain-expert approaches on real-world optimization problems such as embedding table sharding, inverse photonic design, and nonlinear route planning.
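The core idea can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not code from the paper: the nonlinear objective `f`, the targets `t`, the tiny enumerated feasible set, and the straight-through-style cost update all stand in for the paper's learned surrogate and its more careful differentiation through an actual combinatorial solver.

```python
import itertools

# Hypothetical nonlinear objective over binary selections (pick exactly 2 of
# 3 items): f(x) = sum_i (x_i - t_i)^2 with made-up targets t.
t = [0.9, 0.1, 0.8]

def f(x):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, t))

def grad_f(x):
    return [2 * (xi - ti) for xi, ti in zip(x, t)]

# Stand-in combinatorial "solver": minimize the linear surrogate cost c.x
# over the feasible set, as an off-the-shelf MILP solver would at scale.
feasible = [tuple(1 if i in combo else 0 for i in range(3))
            for combo in itertools.combinations(range(3), 2)]

def solve(c):
    return min(feasible, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))

# SurCo-zero-style loop: nudge the surrogate costs with the gradient of the
# nonlinear loss at the solver's output (a crude straight-through update;
# the paper differentiates through the solver more carefully) and keep the
# best solution seen.
c = [0.0, 0.0, 0.0]
best_x, best_val = None, float("inf")
for _ in range(20):
    x = solve(c)
    if f(x) < best_val:
        best_val, best_x = f(x), x
    c = [ci + 0.5 * gi for ci, gi in zip(c, grad_f(x))]

print(best_x, round(best_val, 3))  # (1, 0, 1) 0.06
```

Note that every iterate `x` is feasible by construction, since it comes out of the combinatorial solver; the learning only reshapes the linear costs that the solver optimizes.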