Multiplicative Weights Updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always

Ioannis Panageas, Georgios Piliouras, Xiao Wang
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4961-4969, 2019.

Abstract

Non-concave maximization has been the subject of much recent study in the optimization and machine learning communities, particularly in deep learning. Recent papers ([Ge et al. 2015, Lee et al. 2017] and references therein) indicate that first-order methods work well and avoid saddle points. Results such as those in [Lee et al. 2017], however, are limited to the unconstrained case or to cases where the critical points lie in the interior of the feasibility set, which fails to capture some of the most interesting applications. In this paper we focus on constrained non-concave maximization. We analyze a variant of a well-established algorithm in machine learning called Multiplicative Weights Update (MWU) for the maximization problem $\max_{\mathbf{x} \in D} P(\mathbf{x})$, where $P$ is non-concave and twice continuously differentiable and $D$ is a product of simplices. We show that, for small enough stepsizes, MWU converges almost always to critical points that satisfy the second-order KKT conditions, by combining techniques from dynamical systems with a recent connection between the Baum-Eagon inequality and MWU [Palaiopanos et al. 2017].
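To make the setting concrete, the following is a minimal sketch of a standard linear-form multiplicative-weights step on a single simplex: each coordinate is multiplied by $(1 + \varepsilon\,\partial P/\partial x_i)$ and the vector is renormalized, so iterates stay in the simplex by construction. This is an illustration of the general MWU template, not necessarily the exact variant analyzed in the paper; the toy objective $P$ and stepsize below are hypothetical choices for demonstration.

```python
import numpy as np

def mwu_step(x, grad, eps=0.05):
    """One linear-form MWU step on the probability simplex.

    Illustrative sketch: reweight each coordinate by (1 + eps * gradient
    component), then renormalize so the iterate remains a distribution.
    """
    w = x * (1.0 + eps * grad)
    return w / w.sum()

# Hypothetical non-concave-on-its-domain toy objective on the 3-simplex:
# P(x) = x0^2 + 0.5 * x1^2, whose maximum over the simplex is at the
# vertex e0 = (1, 0, 0), a boundary point (not an interior critical point).
def P(x):
    return x[0] ** 2 + 0.5 * x[1] ** 2

def gradP(x):
    return np.array([2.0 * x[0], x[1], 0.0])

x = np.full(3, 1.0 / 3.0)
for _ in range(2000):
    x = mwu_step(x, gradP(x))
```

With a small stepsize the iterates remain on the simplex and the objective increases monotonically toward the boundary maximizer, which is the kind of constrained stationary point (satisfying KKT conditions at the boundary) that unconstrained saddle-avoidance results do not cover.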

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-panageas19a,
  title     = {Multiplicative Weights Updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always},
  author    = {Panageas, Ioannis and Piliouras, Georgios and Wang, Xiao},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4961--4969},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/panageas19a/panageas19a.pdf},
  url       = {https://proceedings.mlr.press/v97/panageas19a.html},
  abstract  = {Non-concave maximization has been the subject of much recent study in the optimization and machine learning communities, particularly in deep learning. Recent papers ([Ge et al. 2015, Lee et al. 2017] and references therein) indicate that first-order methods work well and avoid saddle points. Results such as those in [Lee et al. 2017], however, are limited to the unconstrained case or to cases where the critical points lie in the interior of the feasibility set, which fails to capture some of the most interesting applications. In this paper we focus on constrained non-concave maximization. We analyze a variant of a well-established algorithm in machine learning called Multiplicative Weights Update (MWU) for the maximization problem $\max_{\mathbf{x} \in D} P(\mathbf{x})$, where $P$ is non-concave and twice continuously differentiable and $D$ is a product of simplices. We show that, for small enough stepsizes, MWU converges almost always to critical points that satisfy the second-order KKT conditions, by combining techniques from dynamical systems with a recent connection between the Baum-Eagon inequality and MWU [Palaiopanos et al. 2017].}
}
Endnote
%0 Conference Paper
%T Multiplicative Weights Updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always
%A Ioannis Panageas
%A Georgios Piliouras
%A Xiao Wang
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-panageas19a
%I PMLR
%P 4961--4969
%U https://proceedings.mlr.press/v97/panageas19a.html
%V 97
%X Non-concave maximization has been the subject of much recent study in the optimization and machine learning communities, particularly in deep learning. Recent papers ([Ge et al. 2015, Lee et al. 2017] and references therein) indicate that first-order methods work well and avoid saddle points. Results such as those in [Lee et al. 2017], however, are limited to the unconstrained case or to cases where the critical points lie in the interior of the feasibility set, which fails to capture some of the most interesting applications. In this paper we focus on constrained non-concave maximization. We analyze a variant of a well-established algorithm in machine learning called Multiplicative Weights Update (MWU) for the maximization problem $\max_{\mathbf{x} \in D} P(\mathbf{x})$, where $P$ is non-concave and twice continuously differentiable and $D$ is a product of simplices. We show that, for small enough stepsizes, MWU converges almost always to critical points that satisfy the second-order KKT conditions, by combining techniques from dynamical systems with a recent connection between the Baum-Eagon inequality and MWU [Palaiopanos et al. 2017].
APA
Panageas, I., Piliouras, G. & Wang, X. (2019). Multiplicative Weights Updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4961-4969. Available from https://proceedings.mlr.press/v97/panageas19a.html.

Related Material