In Defense of Uniform Convergence: Generalization via Derandomization with an Application to Interpolating Predictors

Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel Roy
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:7263-7272, 2020.

Abstract

We propose to study the generalization error of a learned predictor $\hat{h}$ in terms of that of a surrogate (potentially randomized) predictor that is coupled to $\hat{h}$ and designed to trade empirical risk for control of generalization error. In the case where the learned predictor interpolates the data, it is interesting to consider theoretical surrogate classifiers that are partially derandomized or rerandomized, e.g., fit to the training data but with modified label noise. We also show that replacing the learned predictor by its conditional distribution with respect to an arbitrary $\sigma$-field is a convenient way to derandomize. We study two examples, inspired by the work of Nagarajan and Kolter (2019) and Bartlett et al. (2020), where the learned predictor interpolates the training data with high probability, has small risk, and, yet, does not belong to a nonrandom class with a tight uniform bound on two-sided generalization error. At the same time, we bound the risk of the learned predictor in terms of surrogates constructed by conditioning and denoising, respectively, and shown to belong to nonrandom classes with uniformly small generalization error.
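The surrogate argument described above can be sketched as a generic risk decomposition (a standard identity for any coupled pair $\hat{h}, \tilde{h}$, not the paper's exact bound), where $R$ denotes risk and $\hat{R}$ empirical risk:

```latex
\[
  \underbrace{R(\hat{h}) - \hat{R}(\hat{h})}_{\text{gen. error of } \hat{h}}
  \;=\;
  \underbrace{\bigl(R(\hat{h}) - R(\tilde{h})\bigr)}_{\text{risk coupling}}
  \;+\;
  \underbrace{\bigl(R(\tilde{h}) - \hat{R}(\tilde{h})\bigr)}_{\text{gen. error of surrogate } \tilde{h}}
  \;+\;
  \underbrace{\bigl(\hat{R}(\tilde{h}) - \hat{R}(\hat{h})\bigr)}_{\text{empirical-risk coupling}}
\]
```

The middle term is where uniform convergence enters: if the surrogate $\tilde{h}$ lies in a nonrandom class with uniformly small generalization error, that term is controlled uniformly, while the two coupling terms are controlled by the construction of $\tilde{h}$ (e.g., by conditioning or denoising).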

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-negrea20a,
  title     = {In Defense of Uniform Convergence: Generalization via Derandomization with an Application to Interpolating Predictors},
  author    = {Negrea, Jeffrey and Dziugaite, Gintare Karolina and Roy, Daniel},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {7263--7272},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/negrea20a/negrea20a.pdf},
  url       = {http://proceedings.mlr.press/v119/negrea20a.html},
  abstract  = {We propose to study the generalization error of a learned predictor $\hat{h}$ in terms of that of a surrogate (potentially randomized) predictor that is coupled to $\hat{h}$ and designed to trade empirical risk for control of generalization error. In the case where the learned predictor interpolates the data, it is interesting to consider theoretical surrogate classifiers that are partially derandomized or rerandomized, e.g., fit to the training data but with modified label noise. We also show that replacing the learned predictor by its conditional distribution with respect to an arbitrary $\sigma$-field is a convenient way to derandomize. We study two examples, inspired by the work of Nagarajan and Kolter (2019) and Bartlett et al. (2020), where the learned predictor interpolates the training data with high probability, has small risk, and, yet, does not belong to a nonrandom class with a tight uniform bound on two-sided generalization error. At the same time, we bound the risk of the learned predictor in terms of surrogates constructed by conditioning and denoising, respectively, and shown to belong to nonrandom classes with uniformly small generalization error.}
}
EndNote
%0 Conference Paper
%T In Defense of Uniform Convergence: Generalization via Derandomization with an Application to Interpolating Predictors
%A Jeffrey Negrea
%A Gintare Karolina Dziugaite
%A Daniel Roy
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-negrea20a
%I PMLR
%P 7263--7272
%U http://proceedings.mlr.press/v119/negrea20a.html
%V 119
%X We propose to study the generalization error of a learned predictor $\hat{h}$ in terms of that of a surrogate (potentially randomized) predictor that is coupled to $\hat{h}$ and designed to trade empirical risk for control of generalization error. In the case where the learned predictor interpolates the data, it is interesting to consider theoretical surrogate classifiers that are partially derandomized or rerandomized, e.g., fit to the training data but with modified label noise. We also show that replacing the learned predictor by its conditional distribution with respect to an arbitrary $\sigma$-field is a convenient way to derandomize. We study two examples, inspired by the work of Nagarajan and Kolter (2019) and Bartlett et al. (2020), where the learned predictor interpolates the training data with high probability, has small risk, and, yet, does not belong to a nonrandom class with a tight uniform bound on two-sided generalization error. At the same time, we bound the risk of the learned predictor in terms of surrogates constructed by conditioning and denoising, respectively, and shown to belong to nonrandom classes with uniformly small generalization error.
APA
Negrea, J., Dziugaite, G.K. & Roy, D. (2020). In Defense of Uniform Convergence: Generalization via Derandomization with an Application to Interpolating Predictors. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:7263-7272. Available from http://proceedings.mlr.press/v119/negrea20a.html.
