Modeling simple structures and geometry for better stochastic optimization algorithms
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:2425-2434, 2019.
We develop model-based methods for stochastic optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods. For appropriately accurate models, the methods enjoy stronger convergence and robustness guarantees than classical approaches and typically add little to no computational overhead over stochastic subgradient methods. For example, we show that methods using the improved models converge with probability 1; these methods are also adaptive to a natural class of what we term easy optimization problems, achieving linear convergence under appropriate strong growth conditions on the objective. Our experimental investigation shows the advantages of more accurate modeling over standard subgradient methods across many smooth and non-smooth, convex and non-convex optimization problems.
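To make the idea concrete, the following is an illustrative sketch (not the paper's code or experiments) of the "truncated" member of an aProx-style family on a nonnegative objective: for f(x; s) = |⟨a_s, x⟩ − b_s|, the truncated linear model max(f(x; s) + ⟨g, y − x⟩, 0) is a valid lower bound, and minimizing it plus a proximal term yields a clipped, Polyak-like step. The problem instance, step-size schedule, and iteration count below are arbitrary choices for illustration; the realizable data makes this an instance of the "easy" regime the abstract mentions.

```python
import numpy as np

# Sketch of a truncated-model (aProx-style) update on absolute-loss
# linear regression. Assumptions: realizable data (min f = 0), so the
# truncation at 0 is a valid lower bound on each f(x; s).
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star  # realizable: the optimum interpolates every sample

def truncated_aprox(steps=2000, alpha0=1.0):
    x = np.zeros(d)
    for k in range(1, steps + 1):
        i = rng.integers(n)
        r = A[i] @ x - b[i]
        loss = abs(r)            # f(x; s_i) = |<a_i, x> - b_i| >= 0
        if loss > 0:
            g = np.sign(r) * A[i]  # subgradient of the absolute loss
            # aProx step with the truncated model: the usual subgradient
            # step, clipped so the model value cannot drop below 0
            step = min(alpha0 / np.sqrt(k), loss / (g @ g))
            x = x - step * g
    return x

x_hat = truncated_aprox()
print(np.linalg.norm(x_hat - x_star))
```

Near the optimum the clip `loss / (g @ g)` is active, so the method takes Polyak-style steps that adapt to the small residual rather than the fixed `alpha0 / sqrt(k)` schedule; on this realizable instance the iterates converge rapidly to `x_star`, whereas a plain subgradient step with the same schedule does not enjoy that adaptivity.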