Algorithmic Stability and Hypothesis Complexity

Tongliang Liu, Gábor Lugosi, Gergely Neu, Dacheng Tao
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2159-2167, 2017.

Abstract

We introduce a notion of algorithmic stability of learning algorithms—that we term hypothesis stability—that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its hypothesis stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent.
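For intuition, here is a minimal LaTeX sketch of the kind of stability notion the abstract describes. It is an illustrative reading rather than a restatement of the paper's definitions or theorems; the symbols A, S, S^{(i)}, z', \mathcal{H}, and \beta_n are our own notation, and the precise statements are those in the paper itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A learning algorithm A maps a training sample S = (z_1, ..., z_n) to a
% hypothesis A(S) in a normed (Banach) space of functions (\mathcal{H}, \|\cdot\|_{\mathcal{H}}).
% Stability of the hypothesis itself: replacing any single training example
% moves the output of A by at most \beta_n, measured in that norm.
\[
  \sup_{1 \le i \le n} \; \sup_{S,\, z'} \;
    \bigl\| A(S) - A\bigl(S^{(i)}\bigr) \bigr\|_{\mathcal{H}} \;\le\; \beta_n ,
\]
% where S^{(i)} denotes S with its i-th example replaced by z'. The paper's
% main result bounds the generalization error of A in terms of such a
% stability parameter, using martingale inequalities in the Banach space
% \mathcal{H} to which the hypotheses belong.
\end{document}

Under this reading, a smaller \beta_n yields a tighter generalization bound, which is how a general result of this kind can be specialized to the empirical risk minimization and stochastic gradient descent examples mentioned in the abstract.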

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-liu17c,
  title     = {Algorithmic Stability and Hypothesis Complexity},
  author    = {Tongliang Liu and G{\'a}bor Lugosi and Gergely Neu and Dacheng Tao},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2159--2167},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/liu17c/liu17c.pdf},
  url       = {https://proceedings.mlr.press/v70/liu17c.html},
  abstract  = {We introduce a notion of algorithmic stability of learning algorithms—that we term hypothesis stability—that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. The main result of the paper bounds the generalization error of any learning algorithm in terms of its hypothesis stability. The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent.}
}

APA
Liu, T., Lugosi, G., Neu, G. & Tao, D. (2017). Algorithmic Stability and Hypothesis Complexity. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2159-2167. Available from https://proceedings.mlr.press/v70/liu17c.html.

Related Material

Download PDF: http://proceedings.mlr.press/v70/liu17c/liu17c.pdf