Stochastic Variance Reduction for Variational Inequality Methods

Ahmet Alacaoglu, Yura Malitsky
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:778-816, 2022.

Abstract

We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in exactly the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.
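To make the setting concrete, below is a minimal sketch of the deterministic extragradient (mirror-prox) baseline on a matrix game over probability simplices, in the entropic Bregman setup mentioned in the abstract. The function names, the heuristic step size, and the random test matrix are illustrative assumptions, not the paper's code; the paper's contribution is to replace the exact operator evaluations A @ y and A.T @ x below with variance-reduced stochastic estimates while keeping the same convergence guarantees.

import numpy as np


def _entropic_step(z, grad, step):
    """One entropic mirror-descent step on the probability simplex.

    The multiplicative update keeps the iterate strictly inside the simplex,
    so no explicit projection is needed.
    """
    w = z * np.exp(-step * (grad - grad.max()))  # shift exponent for numerical stability
    return w / w.sum()


def mirror_prox_matrix_game(A, n_iters=2000, step=None):
    """Deterministic extragradient (mirror-prox) for the matrix game
    min_{x in simplex} max_{y in simplex} <x, A y> in the entropic (Bregman) setup.

    This is only the classical full-gradient baseline; it is not the
    variance-reduced method proposed in the paper.
    """
    m, n = A.shape
    if step is None:
        step = 1.0 / np.abs(A).max()  # heuristic step size for the bilinear operator
    x = np.full(m, 1.0 / m)
    y = np.full(n, 1.0 / n)
    x_sum, y_sum = np.zeros(m), np.zeros(n)
    for _ in range(n_iters):
        # Extrapolation: evaluate the operator at the current point.
        x_half = _entropic_step(x, A @ y, step)
        y_half = _entropic_step(y, -A.T @ x, step)
        # Update: evaluate the operator at the extrapolated point.
        x = _entropic_step(x, A @ y_half, step)
        y = _entropic_step(y, -A.T @ x_half, step)
        x_sum += x
        y_sum += y
    # Guarantees for saddle-point problems are typically stated for ergodic averages.
    return x_sum / n_iters, y_sum / n_iters


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 40))  # illustrative random matrix game
    x_bar, y_bar = mirror_prox_matrix_game(A)
    # Duality gap of (x_bar, y_bar): max_j (A^T x_bar)_j - min_i (A y_bar)_i >= 0.
    print("duality gap:", (A.T @ x_bar).max() - (A @ y_bar).min())

The entropic setup is used here because multiplicative updates keep the iterates on the simplex without a projection step; the averaged iterates are returned since convergence rates for saddle-point problems are usually measured at the ergodic average.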

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-alacaoglu22a,
  title     = {Stochastic Variance Reduction for Variational Inequality Methods},
  author    = {Alacaoglu, Ahmet and Malitsky, Yura},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {778--816},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/alacaoglu22a/alacaoglu22a.pdf},
  url       = {https://proceedings.mlr.press/v178/alacaoglu22a.html}
}
APA
Alacaoglu, A. & Malitsky, Y. (2022). Stochastic Variance Reduction for Variational Inequality Methods. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:778-816. Available from https://proceedings.mlr.press/v178/alacaoglu22a.html.