Generalized Smooth Variational Inequalities: Methods with Adaptive Stepsizes

Daniil Vankov, Angelia Nedich, Lalitha Sankar
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:49137-49170, 2024.

Abstract

Variational Inequality (VI) problems have attracted great interest in the machine learning (ML) community due to their application in adversarial and multi-agent training. Despite their relevance in ML, the oft-used strong-monotonicity and Lipschitz continuity assumptions on VI problems are restrictive and do not hold in many machine learning problems. To address this, we relax the smoothness and monotonicity assumptions and study structured non-monotone generalized smoothness. The key idea behind our results is the use of adaptive stepsizes. We prove the first-known convergence results for solving generalized smooth VIs for three popular methods, namely, the projection, Korpelevich, and Popov methods. Our convergence rate results for generalized smooth VIs match or improve existing results on smooth VIs. We present numerical experiments that support our theoretical guarantees and highlight the efficiency of the proposed adaptive stepsizes.
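
To make the methods concrete, the sketch below implements the Korpelevich (extragradient) update for a variational inequality VI(F, X) with an adaptive stepsize of the form gamma_k = alpha / (L0 + L1 * ||F(z_k)||), one natural choice when the operator satisfies an (L0, L1)-type generalized smoothness condition. This is a minimal illustration, not the paper's exact stepsize rule; the constants L0, L1, alpha, the ball constraint, and the bilinear test problem are assumptions made for the example.

import numpy as np

def project_ball(z, radius=10.0):
    # Euclidean projection onto the ball {z : ||z|| <= radius}, playing
    # the role of the constraint set X in VI(F, X).
    norm = np.linalg.norm(z)
    return z if norm <= radius else z * (radius / norm)

def extragradient(F, project, z0, L0, L1=1.0, alpha=0.5, iters=2000):
    # Korpelevich (extragradient) method with an adaptive stepsize
    # gamma_k = alpha / (L0 + L1 * ||F(z_k)||) -- an illustrative rule,
    # not necessarily the exact one analyzed in the paper.
    z = z0.copy()
    for _ in range(iters):
        Fz = F(z)
        gamma = alpha / (L0 + L1 * np.linalg.norm(Fz))
        z_half = project(z - gamma * Fz)    # extrapolation step
        z = project(z - gamma * F(z_half))  # update step
    return z

# Illustrative monotone test problem: the bilinear saddle point
# min_x max_y x^T A y, whose VI operator is F(x, y) = (A y, -A^T x).
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

def F(z):
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

# Here F is ||A||_2-Lipschitz, so choosing L0 = ||A||_2 keeps every
# stepsize below alpha / ||A||_2, which is safe for this problem.
z_out = extragradient(F, project_ball, rng.standard_normal(2 * n),
                      L0=np.linalg.norm(A, 2))
print("residual ||F(z)|| at output:", np.linalg.norm(F(z_out)))

For comparison, the Popov method reuses the previous extrapolation value F(z_{k-1/2}) in place of F(z_k), so it needs only one operator evaluation per iteration, while the plain projection method drops the extrapolation step entirely.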

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-vankov24a,
  title     = {Generalized Smooth Variational Inequalities: Methods with Adaptive Stepsizes},
  author    = {Vankov, Daniil and Nedich, Angelia and Sankar, Lalitha},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {49137--49170},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/vankov24a/vankov24a.pdf},
  url       = {https://proceedings.mlr.press/v235/vankov24a.html}
}
Endnote
%0 Conference Paper
%T Generalized Smooth Variational Inequalities: Methods with Adaptive Stepsizes
%A Daniil Vankov
%A Angelia Nedich
%A Lalitha Sankar
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-vankov24a
%I PMLR
%P 49137--49170
%U https://proceedings.mlr.press/v235/vankov24a.html
%V 235
APA
Vankov, D., Nedich, A., & Sankar, L. (2024). Generalized Smooth Variational Inequalities: Methods with Adaptive Stepsizes. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:49137-49170. Available from https://proceedings.mlr.press/v235/vankov24a.html.
