Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming

Chunlin Sun, Shang Liu, Xiaocheng Li
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:32886-32912, 2023.

Abstract

In this paper, we study the predict-then-optimize problem where the output of a machine learning prediction task is used as the input of some downstream optimization problem, say, the objective coefficient vector of a linear program. The problem is also known as predictive analytics or contextual linear programming. The existing approaches largely suffer from either (i) optimization intractability (a non-convex objective function)/statistical inefficiency (a suboptimal generalization bound) or (ii) requiring strong condition(s) such as no constraint or loss calibration. We develop a new approach to the problem called maximum optimality margin which designs the machine learning loss function by the optimality condition of the downstream optimization. The max-margin formulation enjoys both computational efficiency and good theoretical properties for the learning procedure. More importantly, our new approach only needs the observations of the optimal solution in the training data rather than the objective function, which makes it a new and natural approach to the inverse linear programming problem under both contextual and context-free settings; we also analyze the proposed method under both offline and online settings, and demonstrate its performance using numerical experiments.
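The abstract's core mechanism can be illustrated concretely: for a linear program in standard form, a basic solution is optimal when the reduced costs of all nonbasic variables are nonnegative, so one can train a context-to-cost model with a hinge loss that rewards satisfying this condition with a margin. The sketch below is not the authors' implementation; it is a toy illustration on a simplex feasible region, with assumed names (`W_true`, `optimal_vertex`) and a plain subgradient-descent loop standing in for the paper's learning procedure:

```python
import numpy as np

# Toy LP in standard form:  min c^T x  s.t.  A x = b, x >= 0
# (here the probability simplex in R^3, chosen only for illustration).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 2))          # hypothetical ground-truth map z -> c

def optimal_vertex(c):
    # On the simplex the minimizer is the cheapest unit vector.
    x = np.zeros(3)
    x[np.argmin(c)] = 1.0
    return x

# Training data: contexts and observed optimal solutions only -- no cost
# vectors, matching the inverse-LP setting described in the abstract.
Z = rng.normal(size=(200, 2))
X_star = np.array([optimal_vertex(W_true @ z) for z in Z])

def loss_and_grad(W, z, x_star, margin=1.0):
    """Hinge loss on the reduced costs of the nonbasic variables at x_star."""
    c_hat = W @ z
    basic = x_star > 1e-8
    A_B = A[:, basic]                          # basis matrix (m x m)
    y = np.linalg.solve(A_B.T, c_hat[basic])   # dual estimate: A_B^T y = c_B
    nb = np.where(~basic)[0]
    r = c_hat[nb] - A[:, nb].T @ y             # reduced costs of nonbasic vars
    active = (margin - r) > 0
    loss = np.maximum(margin - r, 0.0).sum()
    g_c = np.zeros(3)
    if active.any():
        g_c[nb[active]] = -1.0                 # d(margin - r_j)/dc_j
        # chain rule through y = A_B^{-T} c_B for the basic components
        g_c[basic] = np.linalg.solve(A_B, A[:, nb[active]]).sum(axis=1)
    return loss, np.outer(g_c, z)

W = np.zeros((3, 2))
for _ in range(300):                           # full-batch subgradient descent
    G = np.zeros_like(W)
    for z, xs in zip(Z, X_star):
        _, g = loss_and_grad(W, z, xs)
        G += g
    W -= 0.2 * G / len(Z)

agree = np.mean([np.argmin(W @ z) == np.argmin(W_true @ z) for z in Z])
print(f"argmin agreement on training contexts: {agree:.2f}")
```

Because the reduced costs are linear in the predicted costs, which are in turn linear in `W`, the hinge objective is convex, which is the computational advantage the abstract contrasts against non-convex decision-focused losses.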

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-sun23e,
  title     = {Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming},
  author    = {Sun, Chunlin and Liu, Shang and Li, Xiaocheng},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {32886--32912},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/sun23e/sun23e.pdf},
  url       = {https://proceedings.mlr.press/v202/sun23e.html},
  abstract  = {In this paper, we study the predict-then-optimize problem where the output of a machine learning prediction task is used as the input of some downstream optimization problem, say, the objective coefficient vector of a linear program. The problem is also known as predictive analytics or contextual linear programming. The existing approaches largely suffer from either (i) optimization intractability (a non-convex objective function)/statistical inefficiency (a suboptimal generalization bound) or (ii) requiring strong condition(s) such as no constraint or loss calibration. We develop a new approach to the problem called maximum optimality margin which designs the machine learning loss function by the optimality condition of the downstream optimization. The max-margin formulation enjoys both computational efficiency and good theoretical properties for the learning procedure. More importantly, our new approach only needs the observations of the optimal solution in the training data rather than the objective function, which makes it a new and natural approach to the inverse linear programming problem under both contextual and context-free settings; we also analyze the proposed method under both offline and online settings, and demonstrate its performance using numerical experiments.}
}
Endnote
%0 Conference Paper
%T Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming
%A Chunlin Sun
%A Shang Liu
%A Xiaocheng Li
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-sun23e
%I PMLR
%P 32886--32912
%U https://proceedings.mlr.press/v202/sun23e.html
%V 202
%X In this paper, we study the predict-then-optimize problem where the output of a machine learning prediction task is used as the input of some downstream optimization problem, say, the objective coefficient vector of a linear program. The problem is also known as predictive analytics or contextual linear programming. The existing approaches largely suffer from either (i) optimization intractability (a non-convex objective function)/statistical inefficiency (a suboptimal generalization bound) or (ii) requiring strong condition(s) such as no constraint or loss calibration. We develop a new approach to the problem called maximum optimality margin which designs the machine learning loss function by the optimality condition of the downstream optimization. The max-margin formulation enjoys both computational efficiency and good theoretical properties for the learning procedure. More importantly, our new approach only needs the observations of the optimal solution in the training data rather than the objective function, which makes it a new and natural approach to the inverse linear programming problem under both contextual and context-free settings; we also analyze the proposed method under both offline and online settings, and demonstrate its performance using numerical experiments.
APA
Sun, C., Liu, S. & Li, X. (2023). Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:32886-32912. Available from https://proceedings.mlr.press/v202/sun23e.html.