Learning to Sample in Stochastic Optimization

Sijia Zhou, Yunwen Lei, Ata Kaban
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:5099-5115, 2025.

Abstract

We consider a PAC-Bayes analysis of stochastic optimization algorithms, and devise a new SGDA algorithm inspired by our bounds. Our algorithm learns a data-dependent sampling scheme along with model parameters, which may be seen as assigning a probability to each training point. We demonstrate that learning the sampling scheme increases robustness against misleading training points, as our algorithm learns to avoid bad examples during training. We conduct experiments in both standard and adversarial learning problems on several benchmark datasets, and demonstrate various applications including interpretability upon visual inspection, and robustness to the ill effects of bad training points. We also extend our analysis to pairwise SGD to demonstrate the generalizability of our methodology.
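To give a feel for the idea of learning a sampling distribution alongside model parameters, below is a minimal, hypothetical NumPy sketch; it is not the authors' SGDA algorithm or their PAC-Bayes-derived updates. It assumes a simple logistic-regression model, draws each SGD example from a learned categorical distribution q over the training set, and updates q with an exponentiated-gradient step that down-weights high-loss (here, deliberately mislabeled) points, mixed with the uniform distribution so q cannot collapse. The step sizes, the EG update, and the mix parameter are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: logistic regression with 10% mislabeled points.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)
bad = rng.choice(n, size=20, replace=False)  # indices of corrupted labels
y[bad] = 1.0 - y[bad]

def loss_and_grad(w, x, t):
    """Per-example logistic loss and its gradient with respect to w."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    eps = 1e-12
    loss = -(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
    grad = (p - t) * x
    return loss, grad

w = np.zeros(d)
q = np.full(n, 1.0 / n)           # learned sampling distribution over examples
lr_w, lr_q, mix = 0.1, 0.5, 0.05  # step sizes; `mix` pulls q toward uniform

for step in range(5000):
    i = rng.choice(n, p=q)                        # sample an index from q
    loss_i, grad_i = loss_and_grad(w, X[i], y[i])
    w -= lr_w * grad_i                            # SGD step on the model
    # Exponentiated-gradient step on q: shrink the probability of the
    # sampled point in proportion to its loss, then renormalize and mix
    # with uniform so every point keeps positive sampling probability.
    q[i] *= np.exp(-lr_q * loss_i)
    q /= q.sum()
    q = (1 - mix) * q + mix / n

print("mean prob. of corrupted points:", q[bad].mean())
print("mean prob. of clean points    :", np.delete(q, bad).mean())

In this sketch the uniform-mixing step plays the role a KL-to-uniform regularizer would play in a PAC-Bayes objective: it keeps every example sampled with positive probability, so suspect points are down-weighted rather than discarded outright.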

Cite this Paper

BibTeX

@InProceedings{pmlr-v286-zhou25b,
  title     = {Learning to Sample in Stochastic Optimization},
  author    = {Zhou, Sijia and Lei, Yunwen and Kaban, Ata},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {5099--5115},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/zhou25b/zhou25b.pdf},
  url       = {https://proceedings.mlr.press/v286/zhou25b.html},
  abstract  = {We consider a PAC-Bayes analysis of stochastic optimization algorithms, and devise a new SGDA algorithm inspired by our bounds. Our algorithm learns a data-dependent sampling scheme along with model parameters, which may be seen as assigning a probability to each training point. We demonstrate that learning the sampling scheme increases robustness against misleading training points, as our algorithm learns to avoid bad examples during training. We conduct experiments in both standard and adversarial learning problems on several benchmark datasets, and demonstrate various applications including interpretability upon visual inspection, and robustness to the ill effects of bad training points. We also extend our analysis to pairwise SGD to demonstrate the generalizability of our methodology.}
}
Endnote
%0 Conference Paper
%T Learning to Sample in Stochastic Optimization
%A Sijia Zhou
%A Yunwen Lei
%A Ata Kaban
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-zhou25b
%I PMLR
%P 5099--5115
%U https://proceedings.mlr.press/v286/zhou25b.html
%V 286
%X We consider a PAC-Bayes analysis of stochastic optimization algorithms, and devise a new SGDA algorithm inspired by our bounds. Our algorithm learns a data-dependent sampling scheme along with model parameters, which may be seen as assigning a probability to each training point. We demonstrate that learning the sampling scheme increases robustness against misleading training points, as our algorithm learns to avoid bad examples during training. We conduct experiments in both standard and adversarial learning problems on several benchmark datasets, and demonstrate various applications including interpretability upon visual inspection, and robustness to the ill effects of bad training points. We also extend our analysis to pairwise SGD to demonstrate the generalizability of our methodology.
APA
Zhou, S., Lei, Y. & Kaban, A. (2025). Learning to Sample in Stochastic Optimization. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:5099-5115. Available from https://proceedings.mlr.press/v286/zhou25b.html.