Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements

Emmanouil Vasileios Vlatakis-Gkaragkounis, Angeliki Giannou, Yudong Chen, Qiaomin Xie
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4123-4131, 2024.

Abstract

For min-max optimization and variational inequality problems (VIPs), Stochastic Extragradient (SEG) and Stochastic Gradient Descent Ascent (SGDA) have emerged as preeminent algorithms. Constant step-size versions of SEG/SGDA have gained popularity due to several appealing benefits, but their convergence behavior is complicated even in rudimentary bilinear models. Our work elucidates the probabilistic behavior of these algorithms and their projected variants for a wide range of monotone and non-monotone VIPs with potentially biased stochastic oracles. By recasting them as time-homogeneous Markov chains, we establish geometric convergence to a unique invariant distribution and asymptotic normality. Specializing to min-max optimization, we characterize the relationship between the step-size and the induced bias with respect to the global solution, which in turn allows for bias refinement via the Richardson-Romberg scheme. Our theoretical analysis is corroborated by numerical experiments.
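
To make the step-size/bias trade-off and the Richardson-Romberg refinement concrete, the sketch below runs constant step-size SEG on a toy strongly monotone min-max problem at step-sizes gamma and 2*gamma, estimates the mean of each invariant distribution by a long-run iterate average, and extrapolates to cancel the first-order bias. Everything here (the objective, the noise model, and all parameter values) is an illustrative assumption, not the paper's experimental setup.

import numpy as np

# Toy strongly monotone min-max problem (an illustrative assumption, not the
# paper's setup): f(x, y) = g(x) + x*y - g(y) with g(u) = u^4/4 + u^3/3 + u^2/2.
# The operator F(x, y) = (g'(x) + y, g'(y) - x) is strongly monotone with
# unique root z* = (0, 0); the cubic term in g breaks the symmetry that would
# otherwise make the invariant mean exact, so the mean carries an O(gamma) bias.

def g_prime(u):
    return u**3 + u**2 + u

def operator(z):
    x, y = z
    return np.array([g_prime(x) + y, g_prime(y) - x])

def seg_invariant_mean(gamma, n_iters=500_000, burn_in=50_000,
                       noise_std=0.5, seed=0):
    """Constant step-size SEG with an additive-noise oracle; returns the
    post-burn-in iterate average, an estimate of the invariant mean."""
    rng = np.random.default_rng(seed)
    z = np.zeros(2)
    total = np.zeros(2)
    for k in range(n_iters):
        # Extrapolation step, with a fresh stochastic oracle call.
        z_lead = z - gamma * (operator(z) + noise_std * rng.standard_normal(2))
        # Update step, with an independent oracle call.
        z = z - gamma * (operator(z_lead) + noise_std * rng.standard_normal(2))
        if k >= burn_in:
            total += z
    return total / (n_iters - burn_in)

gamma = 0.05
mean_g = seg_invariant_mean(gamma, seed=1)
mean_2g = seg_invariant_mean(2 * gamma, seed=2)

# Richardson-Romberg extrapolation: the invariant mean is z* + c*gamma + O(gamma^2),
# so 2 * mean(gamma) - mean(2*gamma) cancels the first-order bias term.
mean_rr = 2 * mean_g - mean_2g

print("bias at step gamma  :", np.linalg.norm(mean_g))   # roughly c*gamma
print("bias at step 2gamma :", np.linalg.norm(mean_2g))  # roughly twice as large
print("bias after RR       :", np.linalg.norm(mean_rr))  # O(gamma^2), much smaller

Note that this sketch uses two independent oracle calls per iteration (one for the extrapolation step, one for the update), which is one of several sampling schemes for SEG considered in the literature; the numerical constants and the distance-to-solution behavior depend on that choice.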

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-vasileios-vlatakis-gkaragkounis24a,
  title     = {Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements},
  author    = {Vasileios Vlatakis-Gkaragkounis, Emmanouil and Giannou, Angeliki and Chen, Yudong and Xie, Qiaomin},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4123--4131},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/vasileios-vlatakis-gkaragkounis24a/vasileios-vlatakis-gkaragkounis24a.pdf},
  url       = {https://proceedings.mlr.press/v238/vasileios-vlatakis-gkaragkounis24a.html},
  abstract  = {For min-max optimization and variational inequality problems (VIPs), Stochastic Extragradient (SEG) and Stochastic Gradient Descent Ascent (SGDA) have emerged as preeminent algorithms. Constant step-size versions of SEG/SGDA have gained popularity due to several appealing benefits, but their convergence behavior is complicated even in rudimentary bilinear models. Our work elucidates the probabilistic behavior of these algorithms and their projected variants for a wide range of monotone and non-monotone VIPs with potentially biased stochastic oracles. By recasting them as time-homogeneous Markov chains, we establish geometric convergence to a unique invariant distribution and asymptotic normality. Specializing to min-max optimization, we characterize the relationship between the step-size and the induced bias with respect to the global solution, which in turn allows for bias refinement via the Richardson-Romberg scheme. Our theoretical analysis is corroborated by numerical experiments.}
}
Endnote
%0 Conference Paper
%T Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements
%A Emmanouil Vasileios Vlatakis-Gkaragkounis
%A Angeliki Giannou
%A Yudong Chen
%A Qiaomin Xie
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-vasileios-vlatakis-gkaragkounis24a
%I PMLR
%P 4123--4131
%U https://proceedings.mlr.press/v238/vasileios-vlatakis-gkaragkounis24a.html
%V 238
%X For min-max optimization and variational inequality problems (VIPs), Stochastic Extragradient (SEG) and Stochastic Gradient Descent Ascent (SGDA) have emerged as preeminent algorithms. Constant step-size versions of SEG/SGDA have gained popularity due to several appealing benefits, but their convergence behavior is complicated even in rudimentary bilinear models. Our work elucidates the probabilistic behavior of these algorithms and their projected variants for a wide range of monotone and non-monotone VIPs with potentially biased stochastic oracles. By recasting them as time-homogeneous Markov chains, we establish geometric convergence to a unique invariant distribution and asymptotic normality. Specializing to min-max optimization, we characterize the relationship between the step-size and the induced bias with respect to the global solution, which in turn allows for bias refinement via the Richardson-Romberg scheme. Our theoretical analysis is corroborated by numerical experiments.
APA
Vasileios Vlatakis-Gkaragkounis, E., Giannou, A., Chen, Y. & Xie, Q. (2024). Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4123-4131. Available from https://proceedings.mlr.press/v238/vasileios-vlatakis-gkaragkounis24a.html.