Adaptive Learning with Robust Generalization Guarantees

Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, Zhiwei Steven Wu
29th Annual Conference on Learning Theory, PMLR 49:772-814, 2016.

Abstract

The traditional notion of generalization—i.e., learning a hypothesis whose empirical error is close to its true error—is surprisingly brittle. As has recently been noted [Dwork et al. 2015], even if several algorithms have this guarantee in isolation, the guarantee need not hold if the algorithms are composed adaptively. In this paper, we study three notions of generalization—increasing in strength—that are robust to postprocessing and amenable to adaptive composition, and examine the relationships between them. We call the weakest such notion Robust Generalization. A second, intermediate, notion is the stability guarantee known as differential privacy. The strongest guarantee we consider we call Perfect Generalization. We prove that every hypothesis class that is PAC learnable is also PAC learnable in a robustly generalizing fashion, with almost the same sample complexity. It was previously known that differentially private algorithms satisfy robust generalization. In this paper, we show that robust generalization is a strictly weaker concept, and that there is a learning task that can be carried out subject to robust generalization guarantees, yet cannot be carried out subject to differential privacy. We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.
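As a quick reference for the quantities the abstract alludes to, the following LaTeX sketch records the two standard definitions in the background of this work: the usual generalization requirement relating empirical and true error, and (epsilon, delta)-differential privacy. These are the textbook formulations, not the paper's own definitions of Robust or Perfect Generalization; the symbols alpha, beta, epsilon, delta and the error notation are illustrative choices.

% A learner M, given a sample S of n i.i.d. draws from a distribution D, is said to
% (alpha, beta)-generalize if, with probability at least 1 - beta over S ~ D^n,
% its output hypothesis h satisfies
\[
  \bigl|\, \widehat{\mathrm{err}}_S(h) - \mathrm{err}_D(h) \,\bigr| \le \alpha,
  \qquad
  \widehat{\mathrm{err}}_S(h) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\!\left[h(x_i) \ne y_i\right],
  \quad
  \mathrm{err}_D(h) = \Pr_{(x,y)\sim D}\!\left[h(x) \ne y\right].
\]

% A randomized algorithm M is (epsilon, delta)-differentially private if, for every pair
% of samples S, S' differing in a single element and every event O over outputs,
\[
  \Pr[M(S) \in O] \;\le\; e^{\varepsilon} \cdot \Pr[M(S') \in O] + \delta.
\]

Roughly, the paper's Robust Generalization asks that the first guarantee continue to hold even for hypotheses produced by postprocessing the learner's output, while Perfect Generalization strengthens the distributional-indistinguishability idea behind the second; see the paper body for the exact parameterizations.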

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-cummings16,
  title     = {Adaptive Learning with Robust Generalization Guarantees},
  author    = {Cummings, Rachel and Ligett, Katrina and Nissim, Kobbi and Roth, Aaron and Wu, Zhiwei Steven},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {772--814},
  year      = {2016},
  editor    = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/cummings16.pdf},
  url       = {https://proceedings.mlr.press/v49/cummings16.html},
  abstract  = {The traditional notion of \emph{generalization}—i.e., learning a hypothesis whose empirical error is close to its true error—is surprisingly brittle. As has recently been noted [Dwork et al. 2015], even if several algorithms have this guarantee in isolation, the guarantee need not hold if the algorithms are composed adaptively. In this paper, we study three notions of generalization—increasing in strength—that are \emph{robust} to postprocessing and amenable to adaptive composition, and examine the relationships between them. We call the weakest such notion \emph{Robust Generalization}. A second, intermediate, notion is the stability guarantee known as \emph{differential privacy}. The strongest guarantee we consider we call \emph{Perfect Generalization}. We prove that every hypothesis class that is PAC learnable is also PAC learnable in a robustly generalizing fashion, with almost the same sample complexity. It was previously known that differentially private algorithms satisfy robust generalization. In this paper, we show that robust generalization is a strictly weaker concept, and that there is a learning task that can be carried out subject to robust generalization guarantees, yet cannot be carried out subject to differential privacy. We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.}
}
APA
Cummings, R., Ligett, K., Nissim, K., Roth, A. & Wu, Z. S. (2016). Adaptive Learning with Robust Generalization Guarantees. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:772-814. Available from https://proceedings.mlr.press/v49/cummings16.html.