Boosting Frank-Wolfe by Chasing Gradients

Cyrille Combettes, Sebastian Pokutta
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2111-2121, 2020.

Abstract

The Frank-Wolfe algorithm has become a popular first-order optimization algorithm because it is simple and projection-free, and it has been successfully applied to a variety of real-world problems. Its main drawback, however, lies in its convergence rate, which can be excessively slow due to naive descent directions. We propose to speed up the Frank-Wolfe algorithm by better aligning the descent direction with the negative gradient via a subroutine that chases the negative gradient in a matching-pursuit style while preserving the projection-free property. Although the approach is quite natural, it produces very significant results. We derive convergence rates for our method ranging from $\mathcal{O}(1/t)$ to $\mathcal{O}(e^{-\omega t})$, and we demonstrate its competitive advantage over the state-of-the-art, both per iteration and in CPU time, in a series of computational experiments.
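
To make the gradient-chasing idea concrete, below is a minimal Python sketch on the l1 ball, where the linear minimization oracle (LMO) reduces to picking one coordinate. The names (lmo_l1, boost_direction, boosted_fw), the tolerance delta, the round cap, and the open-loop step size are illustrative assumptions, not the authors' exact procedure, which uses normalized atoms, an alignment-based stopping criterion, and line search.

```python
import numpy as np

def lmo_l1(c, radius=1.0):
    # LMO for the l1 ball of the given radius:
    # returns argmin_{||v||_1 <= radius} <c, v>, a signed vertex.
    i = np.argmax(np.abs(c))
    v = np.zeros_like(c)
    v[i] = -radius * np.sign(c[i])
    return v

def boost_direction(grad, x, lmo, max_rounds=10, delta=1e-4):
    # Chase the target -grad with a nonnegative combination of
    # atoms (v - x), where each v is a vertex returned by the LMO.
    d = np.zeros_like(x)
    total = 0.0
    for _ in range(max_rounds):
        r = -grad - d                # residual still to be explained
        v = lmo(-r)                  # vertex maximizing <r, v>
        u = v - x
        sq = np.dot(u, u)
        if sq == 0.0:
            break
        lam = np.dot(r, u) / sq      # least-squares pursuit coefficient
        if lam <= delta:             # alignment no longer improves
            break
        d += lam * u
        total += lam
    if total == 0.0:
        return None
    # Dividing by the coefficient sum makes x + gamma * g a convex
    # combination of x and vertices for gamma in [0, 1], so every
    # step stays feasible without any projection.
    return d / total

def boosted_fw(grad_f, x0, lmo, iters=200):
    x = x0.copy()
    for t in range(iters):
        g = boost_direction(grad_f(x), x, lmo)
        if g is None:                # fall back to a plain FW step
            g = lmo(grad_f(x)) - x
        x += (2.0 / (t + 2)) * g     # open-loop step size
    return x
```

For instance, calling boosted_fw(lambda x: A.T @ (A @ x - b), np.zeros(n), lmo_l1) minimizes a least-squares objective over the l1 ball; each boosting round costs one extra LMO call and no projection, preserving the cheap per-iteration cost that makes Frank-Wolfe attractive.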

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-combettes20a,
  title     = {Boosting Frank-{W}olfe by Chasing Gradients},
  author    = {Combettes, Cyrille and Pokutta, Sebastian},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2111--2121},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/combettes20a/combettes20a.pdf},
  url       = {https://proceedings.mlr.press/v119/combettes20a.html}
}
EndNote
%0 Conference Paper
%T Boosting Frank-Wolfe by Chasing Gradients
%A Cyrille Combettes
%A Sebastian Pokutta
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-combettes20a
%I PMLR
%P 2111--2121
%U https://proceedings.mlr.press/v119/combettes20a.html
%V 119
APA
Combettes, C. & Pokutta, S. (2020). Boosting Frank-Wolfe by Chasing Gradients. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2111-2121. Available from https://proceedings.mlr.press/v119/combettes20a.html.