Anderson Acceleration of Proximal Gradient Methods

Vien Mai, Mikael Johansson
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6620-6629, 2020.

Abstract

Anderson acceleration is a well-established and simple technique for speeding up fixed-point computations with countless applications. This work introduces novel methods for adapting Anderson acceleration to proximal gradient algorithms. Under some technical conditions, we extend existing local convergence results of Anderson acceleration for smooth fixed-point mappings to the proposed non-smooth setting. We also prove analytically that it is, in general, impossible to guarantee global convergence of native Anderson acceleration. We therefore propose a simple scheme for stabilization that combines the global worst-case guarantees of proximal gradient methods with the local adaptation and practical speed-up of Anderson acceleration. Finally, we provide the first applications of Anderson acceleration to non-Euclidean geometry.
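To make the setting concrete, the following is a minimal sketch of plain (unsafeguarded) Anderson acceleration applied to the proximal-gradient fixed-point map of a small LASSO problem. The problem data, step size, memory size `m`, and function names (`anderson_aa`, `g`) are our own illustrative choices, not from the paper; note the paper shows exactly this native variant can fail to converge globally, so this is a demonstration, not the stabilized scheme the authors propose.

```python
import numpy as np

# Illustrative sketch: Anderson acceleration with memory m applied to the
# proximal-gradient (ISTA) fixed-point map of a LASSO problem
#   minimize 0.5*||A x - b||^2 + lam*||x||_1.
# All data and parameters below are arbitrary demo choices.

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.1
step = 1.0 / np.linalg.norm(A.T @ A, 2)  # step size 1/L for the smooth part

def g(x):
    """Proximal-gradient map: gradient step on the smooth part,
    then soft-thresholding (the prox of lam*||.||_1)."""
    y = x - step * (A.T @ (A @ x - b))
    return np.sign(y) * np.maximum(np.abs(y) - step * lam, 0.0)

def anderson_aa(x0, m=5, iters=200, tol=1e-10):
    """Plain Anderson acceleration of the fixed-point iteration x = g(x)."""
    xs, gs = [x0], [g(x0)]
    x = gs[-1]
    for _ in range(iters):
        xs.append(x)
        gs.append(g(x))
        Xm = np.column_stack(xs[-m - 1:])  # last <= m+1 iterates
        Gm = np.column_stack(gs[-m - 1:])
        F = Gm - Xm                        # residuals f_i = g(x_i) - x_i
        if np.linalg.norm(F[:, -1]) < tol:
            break
        if F.shape[1] == 1:
            x = Gm[:, -1]                  # plain fixed-point step
        else:
            dF = np.diff(F, axis=1)        # residual differences
            dG = np.diff(Gm, axis=1)
            # least-squares mixing coefficients for the extrapolated step
            gam, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
            x = Gm[:, -1] - dG @ gam
    return x

x_aa = anderson_aa(np.zeros(20))
print(np.linalg.norm(g(x_aa) - x_aa))  # fixed-point residual
```

On this well-conditioned instance the accelerated iteration drives the fixed-point residual far below what the same number of plain ISTA steps achieves; the stabilization studied in the paper adds a fallback to the plain proximal-gradient step so the worst-case global guarantees are retained.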

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-mai20a,
  title =     {Anderson Acceleration of Proximal Gradient Methods},
  author =    {Mai, Vien and Johansson, Mikael},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages =     {6620--6629},
  year =      {2020},
  editor =    {III, Hal Daumé and Singh, Aarti},
  volume =    {119},
  series =    {Proceedings of Machine Learning Research},
  month =     {13--18 Jul},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v119/mai20a/mai20a.pdf},
  url =       {http://proceedings.mlr.press/v119/mai20a.html},
  abstract =  {Anderson acceleration is a well-established and simple technique for speeding up fixed-point computations with countless applications. This work introduces novel methods for adapting Anderson acceleration to proximal gradient algorithms. Under some technical conditions, we extend existing local convergence results of Anderson acceleration for smooth fixed-point mappings to the proposed non-smooth setting. We also prove analytically that it is, in general, impossible to guarantee global convergence of native Anderson acceleration. We therefore propose a simple scheme for stabilization that combines the global worst-case guarantees of proximal gradient methods with the local adaptation and practical speed-up of Anderson acceleration. Finally, we provide the first applications of Anderson acceleration to non-Euclidean geometry.}
}
Endnote
%0 Conference Paper
%T Anderson Acceleration of Proximal Gradient Methods
%A Vien Mai
%A Mikael Johansson
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-mai20a
%I PMLR
%P 6620--6629
%U http://proceedings.mlr.press/v119/mai20a.html
%V 119
%X Anderson acceleration is a well-established and simple technique for speeding up fixed-point computations with countless applications. This work introduces novel methods for adapting Anderson acceleration to proximal gradient algorithms. Under some technical conditions, we extend existing local convergence results of Anderson acceleration for smooth fixed-point mappings to the proposed non-smooth setting. We also prove analytically that it is, in general, impossible to guarantee global convergence of native Anderson acceleration. We therefore propose a simple scheme for stabilization that combines the global worst-case guarantees of proximal gradient methods with the local adaptation and practical speed-up of Anderson acceleration. Finally, we provide the first applications of Anderson acceleration to non-Euclidean geometry.
APA
Mai, V. & Johansson, M. (2020). Anderson Acceleration of Proximal Gradient Methods. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6620-6629. Available from http://proceedings.mlr.press/v119/mai20a.html.