Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization

Geoffrey Negiar, Gideon Dresdner, Alicia Tsai, Laurent El Ghaoui, Francesco Locatello, Robert Freund, Fabian Pedregosa
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7253-7262, 2020.

Abstract

We propose a novel Stochastic Frank-Wolfe (a.k.a. conditional gradient) algorithm for constrained smooth finite-sum minimization with a generalized linear prediction/structure. This class of problems includes empirical risk minimization with sparse, low-rank, or other structured constraints. The proposed method is simple to implement, does not require step-size tuning, and has a constant per-iteration cost that is independent of the dataset size. Furthermore, as a byproduct of the method we obtain a stochastic estimator of the Frank-Wolfe gap that can be used as a stopping criterion. Depending on the setting, the proposed method matches or improves on the best computational guarantees for Stochastic Frank-Wolfe algorithms. Benchmarks on several datasets highlight different regimes in which the proposed method exhibits a faster empirical convergence than related methods. Finally, we provide an implementation of all considered methods in an open-source package.
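To make the key idea concrete, the following is a minimal NumPy sketch in the spirit of the abstract, not the authors' exact algorithm or their released implementation. For f(x) = (1/n) sum_i f_i(a_i^T x), the sketch keeps a table of per-sample scalar gradients f_i'(a_i^T x) and refreshes one entry per iteration, so each step costs O(d) independent of the dataset size n; the inner product between the aggregated gradient estimate and the Frank-Wolfe direction serves as a stochastic estimate of the Frank-Wolfe gap, used here as the stopping criterion. The l1-ball constraint, the classic 2/(t+2) step size, and all names below are illustrative assumptions.

import numpy as np

def lmo_l1(grad, radius):
    # Linear minimization oracle for the l1-ball:
    # argmin_{||s||_1 <= radius} <grad, s> is a signed, scaled coordinate vector.
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

def stochastic_fw(A, loss_grad, radius, n_iters=10000, tol=1e-3, seed=0):
    # A: (n, d) data matrix; loss_grad(z, i) returns f_i'(z) for sample i.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)                    # feasible start (inside the l1-ball)
    alpha = np.zeros(n)                # table of scalar gradients f_i'(a_i^T x)
    grad_est = np.zeros(d)             # maintained as (1/n) * A.T @ alpha
    for t in range(n_iters):
        i = rng.integers(n)            # one sample per step: cost independent of n
        new_alpha = loss_grad(A[i] @ x, i)
        grad_est += (new_alpha - alpha[i]) * A[i] / n
        alpha[i] = new_alpha
        s = lmo_l1(grad_est, radius)
        gap_est = grad_est @ (x - s)   # stochastic Frank-Wolfe gap estimate
        if gap_est < tol:
            break                      # gap estimate doubles as stopping criterion
        step = 2.0 / (t + 2)           # classic Frank-Wolfe schedule (no tuning)
        x = (1 - step) * x + step * s  # convex combination keeps x feasible
    return x

For example, l1-constrained logistic regression with labels y in {-1, +1} corresponds to loss_grad(z, i) = -y[i] / (1 + np.exp(y[i] * z)), so a call such as stochastic_fw(A, lambda z, i: -y[i] / (1 + np.exp(y[i] * z)), radius=5.0) returns a sparse-leaning iterate on the l1-ball of radius 5.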

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-negiar20a,
  title     = {Stochastic Frank-{W}olfe for Constrained Finite-Sum Minimization},
  author    = {Negiar, Geoffrey and Dresdner, Gideon and Tsai, Alicia and Ghaoui, Laurent El and Locatello, Francesco and Freund, Robert and Pedregosa, Fabian},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7253--7262},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/negiar20a/negiar20a.pdf},
  url       = {https://proceedings.mlr.press/v119/negiar20a.html},
  abstract  = {We propose a novel Stochastic Frank-Wolfe (a.k.a. conditional gradient) algorithm for constrained smooth finite-sum minimization with a generalized linear prediction/structure. This class of problems includes empirical risk minimization with sparse, low-rank, or other structured constraints. The proposed method is simple to implement, does not require step-size tuning, and has a constant per-iteration cost that is independent of the dataset size. Furthermore, as a byproduct of the method we obtain a stochastic estimator of the Frank-Wolfe gap that can be used as a stopping criterion. Depending on the setting, the proposed method matches or improves on the best computational guarantees for Stochastic Frank-Wolfe algorithms. Benchmarks on several datasets highlight different regimes in which the proposed method exhibits a faster empirical convergence than related methods. Finally, we provide an implementation of all considered methods in an open-source package.}
}
Endnote
%0 Conference Paper
%T Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization
%A Geoffrey Negiar
%A Gideon Dresdner
%A Alicia Tsai
%A Laurent El Ghaoui
%A Francesco Locatello
%A Robert Freund
%A Fabian Pedregosa
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-negiar20a
%I PMLR
%P 7253--7262
%U https://proceedings.mlr.press/v119/negiar20a.html
%V 119
%X We propose a novel Stochastic Frank-Wolfe (a.k.a. conditional gradient) algorithm for constrained smooth finite-sum minimization with a generalized linear prediction/structure. This class of problems includes empirical risk minimization with sparse, low-rank, or other structured constraints. The proposed method is simple to implement, does not require step-size tuning, and has a constant per-iteration cost that is independent of the dataset size. Furthermore, as a byproduct of the method we obtain a stochastic estimator of the Frank-Wolfe gap that can be used as a stopping criterion. Depending on the setting, the proposed method matches or improves on the best computational guarantees for Stochastic Frank-Wolfe algorithms. Benchmarks on several datasets highlight different regimes in which the proposed method exhibits a faster empirical convergence than related methods. Finally, we provide an implementation of all considered methods in an open-source package.
APA
Negiar, G., Dresdner, G., Tsai, A., Ghaoui, L. E., Locatello, F., Freund, R. & Pedregosa, F. (2020). Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7253-7262. Available from https://proceedings.mlr.press/v119/negiar20a.html.