Level-Set Methods for Finite-Sum Constrained Convex Optimization
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3112-3121, 2018.
Abstract
We consider constrained optimization problems in which the objective function and the constraints are each defined as a sum of finitely many loss functions. This model has applications in machine learning such as Neyman-Pearson classification. We consider two level-set methods for this class of problems: an existing inexact Newton method and a new feasible level-set method. To update the level parameter toward optimality, both methods require an oracle that generates upper and lower bounds as well as an affine minorant of the level function. To construct the desired oracle, we reformulate the level function as the value of a saddle-point problem using the conjugates and perspectives of the loss functions. A stochastic variance-reduced gradient method with a special Bregman divergence is then proposed as the oracle for solving that saddle-point problem; the special divergence ensures that the proximal mapping in each iteration can be solved in closed form. The total complexity of both level-set methods using the proposed oracle is analyzed.
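To illustrate the level-set idea, here is a minimal sketch on a hypothetical one-dimensional instance (not from the paper): the level function H(r) = min_x max(f(x) - r, g(x)) is nonincreasing in the level parameter r and vanishes at the optimal value f*, so bisection on r, driven by an oracle that reports the sign of H(r), converges to f*. The toy problem, the grid-search oracle, and the tolerance below are illustrative assumptions; the paper's oracle is instead a stochastic variance-reduced gradient method applied to a saddle-point reformulation.

```python
# Hypothetical toy instance: minimize f(x) = (x - 2)^2
# subject to g(x) = x - 1 <= 0.  Optimal value f* = 1 at x = 1.

def level_function(r, xs):
    """H(r) = min_x max(f(x) - r, g(x)), evaluated by grid search.

    H is nonincreasing in r and H(r) = 0 exactly at r = f*, which is
    what lets a level-set method root-find on r."""
    f = lambda x: (x - 2.0) ** 2
    g = lambda x: x - 1.0
    return min(max(f(x) - r, g(x)) for x in xs)

def level_set_bisection(r_lo=0.0, r_hi=4.0, tol=1e-4):
    # Grid-search oracle standing in for the paper's saddle-point oracle;
    # it only needs to tell us the sign of H(r).
    xs = [i * 0.001 for i in range(-1000, 4001)]  # x in [-1, 4]
    while r_hi - r_lo > tol:
        r = 0.5 * (r_lo + r_hi)
        if level_function(r, xs) > 0:   # r below f*: raise the level
            r_lo = r
        else:                           # r at or above f*: lower the level
            r_hi = r
    return 0.5 * (r_lo + r_hi)

print(level_set_bisection())  # converges to the optimal value f* = 1
```

In the paper's setting the inner minimization cannot be done by grid search; the oracle instead solves the saddle-point reformulation approximately, which is why it returns upper and lower bounds (and an affine minorant) rather than an exact value of H(r).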