Robustness Implies Generalization via Data-Dependent Generalization Bounds

Kenji Kawaguchi, Zhun Deng, Kyle Luh, Jiaoyang Huang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10866-10894, 2022.

Abstract

This paper proves that robustness implies generalization via data-dependent generalization bounds. As a result, robustness and generalization are shown to be connected closely in a data-dependent manner. Our bounds improve previous bounds in two directions, to solve an open problem that has seen little development since 2010. The first is to reduce the dependence on the covering number. The second is to remove the dependence on the hypothesis space. We present several examples, including ones for lasso and deep learning, in which our bounds are provably preferable. The experiments on real-world data and theoretical models demonstrate near-exponential improvements in various situations. To achieve these improvements, we do not require additional assumptions on the unknown distribution; instead, we only incorporate an observable and computable property of the training samples. A key technical innovation is an improved concentration bound for multinomial random variables that is of independent interest beyond robustness and generalization.
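As a point of reference for the improvements described above (this context is not part of the original abstract), the classical robustness-based guarantee of Xu & Mannor (2012) states that if a learning algorithm is (K, ε(·))-robust with respect to a partition of the sample space into K disjoint sets and the loss is bounded by M, then with probability at least 1 − δ over an i.i.d. sample S of size n,

\[
\left| \mathcal{L}(\mathcal{A}_S) - \hat{\mathcal{L}}_{\mathrm{emp}}(\mathcal{A}_S) \right|
\;\le\; \epsilon(S) + M \sqrt{\frac{2K \ln 2 + 2 \ln(1/\delta)}{n}}.
\]

The \(\sqrt{K/n}\) term arises from a concentration inequality for the multinomial random variable counting how the n samples fall into the K partition cells, which is why sharpening that concentration bound, as this paper does, directly reduces the dependence on the covering number K.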

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-kawaguchi22a,
  title     = {Robustness Implies Generalization via Data-Dependent Generalization Bounds},
  author    = {Kawaguchi, Kenji and Deng, Zhun and Luh, Kyle and Huang, Jiaoyang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {10866--10894},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/kawaguchi22a/kawaguchi22a.pdf},
  url       = {https://proceedings.mlr.press/v162/kawaguchi22a.html}
}
Endnote
%0 Conference Paper
%T Robustness Implies Generalization via Data-Dependent Generalization Bounds
%A Kenji Kawaguchi
%A Zhun Deng
%A Kyle Luh
%A Jiaoyang Huang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-kawaguchi22a
%I PMLR
%P 10866--10894
%U https://proceedings.mlr.press/v162/kawaguchi22a.html
%V 162
APA
Kawaguchi, K., Deng, Z., Luh, K., & Huang, J. (2022). Robustness Implies Generalization via Data-Dependent Generalization Bounds. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:10866-10894. Available from https://proceedings.mlr.press/v162/kawaguchi22a.html.