Dealing With Unbounded Gradients in Stochastic Saddle-point Optimization

Gergely Neu, Nneka Okolo
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:37508-37530, 2024.

Abstract

We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain and the gradient noise scale linearly with the size of the iterates (and are thus potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span.
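
As a rough illustration of the setting the abstract describes, the sketch below runs stochastic gradient descent-ascent on a simple bilinear saddle-point objective with gradient noise that scales with the iterate norm, and stabilizes the iterates with a quadratic regularizer that pulls them toward the origin. This is an assumption-laden toy example in Python (the step size, regularization strength, and noise model are all illustrative choices), not the regularization technique proposed in the paper:

import numpy as np

# Minimal illustrative sketch (NOT the paper's algorithm): stochastic
# gradient descent-ascent on the bilinear saddle-point objective
#     L(x, y) = x^T A y,
# where the gradient noise scales linearly with the size of the current
# iterate, and a quadratic regularizer pulls the iterates toward the
# origin. All constants (eta, reg, sigma) are illustrative assumptions.

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))   # the saddle point of x^T A y is at the origin

x = rng.standard_normal(d)
y = rng.standard_normal(d)

eta = 0.01    # step size
reg = 0.5     # regularization strength (keeps the iterates from spiraling out)
sigma = 0.1   # gradient-noise level, relative to the iterate norm

for t in range(5000):
    scale = np.linalg.norm(np.concatenate([x, y]))
    # Stochastic gradients whose noise grows linearly with the iterate norm.
    g_x = A @ y + sigma * scale * rng.standard_normal(d)
    g_y = A.T @ x + sigma * scale * rng.standard_normal(d)
    # Descent in x, ascent in y; the reg terms shrink both blocks toward 0.
    # With reg = 0 this is plain gradient descent-ascent, which tends to
    # spiral outward on bilinear games.
    x = x - eta * (g_x + reg * x)
    y = y + eta * (g_y - reg * y)

print("distance to the saddle point:", np.linalg.norm(np.concatenate([x, y])))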

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-neu24a,
  title     = {Dealing With Unbounded Gradients in Stochastic Saddle-point Optimization},
  author    = {Neu, Gergely and Okolo, Nneka},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {37508--37530},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/neu24a/neu24a.pdf},
  url       = {https://proceedings.mlr.press/v235/neu24a.html},
  abstract  = {We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain and the gradient noise scales linearly with the size of the iterates (and is thus potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span.}
}
Endnote
%0 Conference Paper
%T Dealing With Unbounded Gradients in Stochastic Saddle-point Optimization
%A Gergely Neu
%A Nneka Okolo
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-neu24a
%I PMLR
%P 37508--37530
%U https://proceedings.mlr.press/v235/neu24a.html
%V 235
%X We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain and the gradient noise scales linearly with the size of the iterates (and is thus potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span.
APA
Neu, G. & Okolo, N. (2024). Dealing With Unbounded Gradients in Stochastic Saddle-point Optimization. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:37508-37530. Available from https://proceedings.mlr.press/v235/neu24a.html.
