Two-Player Games for Efficient Non-Convex Constrained Optimization
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:300-332, 2019.
Abstract
In recent years, constrained optimization has become increasingly relevant to the machine learning community, with applications including Neyman-Pearson classification, robust optimization, and fair machine learning. A natural approach to constrained optimization is to optimize the Lagrangian, but this is not guaranteed to work in the non-convex setting, and, if using a first-order method, cannot cope with non-differentiable constraints (e.g. constraints on rates or proportions).
The Lagrangian can be interpreted as a two-player game played between a player who seeks to optimize over the model parameters, and a player who wishes to maximize over the Lagrange multipliers. We propose a non-zero-sum variant of the Lagrangian formulation that can cope with non-differentiable—even discontinuous—constraints, which we call the “proxy-Lagrangian”. The first player minimizes external regret in terms of easy-to-optimize “proxy constraints”, while the second player enforces the original constraints by minimizing swap regret.
For this new formulation, as for the Lagrangian in the non-convex setting, the result is a stochastic classifier. For both the proxy-Lagrangian and Lagrangian formulations, however, we prove that this classifier, instead of having unbounded size, can be taken to be a distribution over no more than $m+1$ models (where $m$ is the number of constraints). This is a significant improvement in practical terms.
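To build intuition for the game-theoretic view, the following is a minimal sketch (not the paper's algorithm, and in particular not the proxy-Lagrangian method) of the ordinary Lagrangian game: a parameter player does gradient descent on the Lagrangian while a multiplier player does projected gradient ascent, and the stochastic solution is the mixture over the saved iterates. The objective, constraint, and step sizes below are all toy choices for illustration:

```python
import math

# Toy Lagrangian game (illustrative only): minimize f(theta) subject to
# g(theta) <= 0 via L(theta, lam) = f(theta) + lam * g(theta).
# The theta-player descends on L; the lam-player ascends over lam >= 0.

def f(theta):
    # Hypothetical non-convex objective.
    return math.sin(3 * theta) + theta ** 2

def df(theta):
    return 3 * math.cos(3 * theta) + 2 * theta

def g(theta):
    # Hypothetical constraint theta >= 1.5, written as g(theta) <= 0.
    return 1.5 - theta

def dg(theta):
    return -1.0

theta, lam = 2.0, 0.0
eta_theta, eta_lam = 0.01, 0.1
iterates = []
for _ in range(2000):
    # theta-player: gradient descent on the Lagrangian.
    theta -= eta_theta * (df(theta) + lam * dg(theta))
    # lam-player: projected gradient ascent (lam stays nonnegative).
    lam = max(0.0, lam + eta_lam * g(theta))
    iterates.append(theta)

# In the non-convex setting the guarantee applies to the *average* play: a
# stochastic solution mixing uniformly over the saved iterates (which the
# paper shows can be compressed to a mixture of at most m + 1 models).
avg_violation = sum(g(t) for t in iterates) / len(iterates)
```

Run as written, the multiplier becomes active (the unconstrained minimizer of this toy `f` violates `g`), the iterates settle near the constraint boundary, and the uniform mixture over iterates is near-feasible on average.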