Reasoning About Generalization via Conditional Mutual Information

Thomas Steinke, Lydia Zakynthinou
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:3437-3452, 2020.

Abstract

We provide an information-theoretic framework for studying the generalization properties of machine learning algorithms. Our framework ties together existing approaches, including uniform convergence bounds and recent methods for adaptive data analysis. Specifically, we use Conditional Mutual Information (CMI) to quantify how well the input (i.e., the training data) can be recognized given the output (i.e., the trained model) of the learning algorithm. We show that bounds on CMI can be obtained from VC dimension, compression schemes, differential privacy, and other methods. We then show that bounded CMI implies various forms of generalization.
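For readers wanting the central definition at a glance, here is a brief sketch in LaTeX. The notation (supersample Z-tilde, selector S, algorithm A) follows the paper's setup, and the stated inequality is the paper's expected-generalization guarantee for losses bounded in [0,1]; see the paper for the precise statement and conditions.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of the CMI setup from Steinke and Zakynthinou (2020).
% Draw a supersample $\tilde{Z} \in \mathcal{Z}^{n \times 2}$ of $2n$ i.i.d.
% examples and an independent uniform selector $S \in \{0,1\}^n$; the training
% set $\tilde{Z}_S = (\tilde{Z}_{1,S_1}, \dots, \tilde{Z}_{n,S_n})$ keeps one
% example from each pair. For a (possibly randomized) algorithm $A$,
\[
  \operatorname{CMI}(A) \;=\; I\bigl(A(\tilde{Z}_S);\, S \,\bigm|\, \tilde{Z}\bigr),
\]
% i.e., the mutual information between the output and the selector bits,
% conditioned on the supersample. Since $S$ consists of $n$ bits, we always
% have $\operatorname{CMI}(A) \le n \ln 2$. For a loss taking values in
% $[0,1]$, bounded CMI implies an expected generalization bound of the form
\[
  \bigl|\mathbb{E}[\mathrm{err}_{\mathrm{test}} - \mathrm{err}_{\mathrm{train}}]\bigr|
  \;\le\; \sqrt{\frac{2\,\operatorname{CMI}(A)}{n}}.
\]
\end{document}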

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-steinke20a,
  title     = {{R}easoning {A}bout {G}eneralization via {C}onditional {M}utual {I}nformation},
  author    = {Steinke, Thomas and Zakynthinou, Lydia},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {3437--3452},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/steinke20a/steinke20a.pdf},
  url       = {http://proceedings.mlr.press/v125/steinke20a.html}
}
Endnote
%0 Conference Paper
%T Reasoning About Generalization via Conditional Mutual Information
%A Thomas Steinke
%A Lydia Zakynthinou
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-steinke20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 3437--3452
%U http://proceedings.mlr.press/v125/steinke20a.html
%V 125
%W PMLR
APA
Steinke, T. & Zakynthinou, L. (2020). Reasoning About Generalization via Conditional Mutual Information. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:3437-3452.
