Algorithmic Stability Unleashed: Generalization Bounds with Unbounded Losses

Shaojie Li, Bowei Zhu, Yong Liu
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:29489-29510, 2024.

Abstract

One of the central problems of statistical learning theory is quantifying the generalization ability of learning algorithms within a probabilistic framework. Algorithmic stability is a powerful tool for deriving generalization bounds; however, it typically rests on the critical assumption that losses are bounded. In this paper, we relax this condition to unbounded loss functions with subweibull diameter. This yields new generalization bounds for algorithmic stability and recovers existing results for subgaussian and subexponential diameters as special cases. Furthermore, we provide a refined stability analysis by developing generalization bounds that can be $\sqrt{n}$-times faster than previous results, where $n$ is the sample size. Our main technical contribution is a set of general concentration inequalities for subweibull random variables, which may be of independent interest.
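
For readers unfamiliar with the terminology, the following is a minimal sketch of the standard subweibull definition, stated here with tail parameter $\theta$; the paper's exact notation and constants may differ. A random variable $X$ is subweibull with parameter $\theta > 0$ if its Orlicz-type norm

$$\|X\|_{\psi_\theta} = \inf\Big\{ t > 0 : \mathbb{E}\, \exp\big( (|X|/t)^\theta \big) \le 2 \Big\}$$

is finite, which is equivalent (up to constants) to the tail bound

$$\mathbb{P}\big(|X| \ge u\big) \le 2 \exp\big( -(u/\|X\|_{\psi_\theta})^\theta \big), \quad u \ge 0.$$

The case $\theta = 2$ is subgaussian and $\theta = 1$ is subexponential, which is why bounds stated for subweibull diameters subsume those earlier results; smaller $\theta$ permits heavier tails. Roughly speaking, the "diameter" in the abstract refers to the loss difference between models trained on neighboring samples (samples differing in one point), and requiring it to be subweibull replaces the usual boundedness assumption on the loss.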

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-li24cs,
  title     = {Algorithmic Stability Unleashed: Generalization Bounds with Unbounded Losses},
  author    = {Li, Shaojie and Zhu, Bowei and Liu, Yong},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {29489--29510},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24cs/li24cs.pdf},
  url       = {https://proceedings.mlr.press/v235/li24cs.html}
}
Endnote
%0 Conference Paper
%T Algorithmic Stability Unleashed: Generalization Bounds with Unbounded Losses
%A Shaojie Li
%A Bowei Zhu
%A Yong Liu
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-li24cs
%I PMLR
%P 29489--29510
%U https://proceedings.mlr.press/v235/li24cs.html
%V 235
APA
Li, S., Zhu, B., & Liu, Y. (2024). Algorithmic Stability Unleashed: Generalization Bounds with Unbounded Losses. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:29489-29510. Available from https://proceedings.mlr.press/v235/li24cs.html.