Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method

Jiaming Liang
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-25, 2026.

Abstract

High-probability guarantees in stochastic optimization are often obtained only under strong noise assumptions such as sub-Gaussian tails. We show that such guarantees can also be achieved under the weaker assumption of bounded variance by developing a stochastic proximal point method. This method combines a proximal subproblem solver, which inherently reduces variance, with a probability booster that amplifies per-iteration reliability into high-confidence results. The analysis demonstrates convergence with low sample complexity, without restrictive noise assumptions or reliance on mini-batching.
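The abstract names two ingredients but does not spell them out here. As a rough illustration only, the following Python sketch shows one way such a pair could fit together: an inner SGD loop for the strongly convex proximal subproblem (the variance-damping step) and a median-of-runs selection rule standing in for the probability booster. Everything below (the names sgd_prox_solve, boost, stochastic_proximal_point, the step sizes, and the toy noise model) is a hypothetical stand-in, not the paper's actual algorithm.

import numpy as np

# Illustrative sketch only -- hypothetical names and parameters, not the paper's method.

def sgd_prox_solve(stoch_grad, x_center, lam, n_steps, rng):
    # Approximately solve min_x f(x) + ||x - x_center||^2 / (2*lam) with SGD.
    # The quadratic prox term makes the subproblem (1/lam)-strongly convex,
    # which is what damps the effect of the gradient noise.
    x = x_center.copy()
    for t in range(1, n_steps + 1):
        g = stoch_grad(x, rng) + (x - x_center) / lam   # stochastic gradient of the prox objective
        x = x - (2.0 * lam / (t + 1)) * g               # O(1/t) step size for strong convexity
    return x                                            # (iterate averaging omitted for brevity)

def boost(candidates):
    # Classic confidence-amplification trick: among independent runs, return
    # the candidate with the smallest median distance to the other candidates.
    dists = np.array([[np.linalg.norm(a - b) for b in candidates] for a in candidates])
    return candidates[int(np.argmin(np.median(dists, axis=1)))]

def stochastic_proximal_point(stoch_grad, x0, lam=1.0, n_outer=20,
                              n_inner=200, n_trials=9, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_outer):
        trials = [sgd_prox_solve(stoch_grad, x, lam, n_inner, rng)
                  for _ in range(n_trials)]
        x = boost(trials)   # turn constant-probability runs into a reliable iterate
    return x

if __name__ == "__main__":
    # Toy check: f(x) = ||x||^2 / 2 with heavy-tailed but finite-variance
    # Student-t gradient noise (bounded variance, no sub-Gaussian tails).
    d = 5
    noisy_grad = lambda x, rng: x + rng.standard_t(df=3, size=d)
    x_hat = stochastic_proximal_point(noisy_grad, x0=np.full(d, 5.0))
    print("distance to optimum:", np.linalg.norm(x_hat))

The design logic behind such a booster: under bounded variance alone, Chebyshev's inequality already makes each inner run land near the subproblem solution with some constant probability above 1/2, so among m independent runs most candidates cluster around that solution, and picking the candidate with the smallest median distance to the others fails with probability decaying exponentially in m. That is how constant per-run reliability can be amplified into a high-probability guarantee without sub-Gaussian tails or mini-batching.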

Cite this Paper


BibTeX
@InProceedings{pmlr-v313-liang26a,
  title     = {Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method},
  author    = {Liang, Jiaming},
  booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory},
  pages     = {1--25},
  year      = {2026},
  editor    = {Telgarsky, Matus and Ullman, Jonathan},
  volume    = {313},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/liang26a/liang26a.pdf},
  url       = {https://proceedings.mlr.press/v313/liang26a.html}
}
Endnote
%0 Conference Paper
%T Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method
%A Jiaming Liang
%B Proceedings of The 37th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2026
%E Matus Telgarsky
%E Jonathan Ullman
%F pmlr-v313-liang26a
%I PMLR
%P 1--25
%U https://proceedings.mlr.press/v313/liang26a.html
%V 313
APA
Liang, J. (2026). Variance Reduction and Low Sample Complexity in Stochastic Optimization via Proximal Point Method. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-25. Available from https://proceedings.mlr.press/v313/liang26a.html.
