Information-Theoretic Guarantees for Empirical Risk Minimization with Applications to Model Selection and Large-Scale Optimization
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:149-158, 2018.
Abstract
In this paper, we derive bounds on the mutual information of the empirical risk minimization (ERM) procedure for both 0-1 and strongly-convex loss classes. We prove that under the Axiom of Choice, the existence of an ERM learning rule with a vanishing mutual information is equivalent to the assertion that the loss class has a finite VC dimension, thus bridging information theory with statistical learning theory. Similarly, an asymptotic bound on the mutual information is established for strongly-convex loss classes in terms of the number of model parameters. The latter result rests on a central limit theorem (CLT) that we derive in this paper. In addition, we use our results to analyze the excess risk in stochastic convex optimization and unify previous works. Finally, we present two important applications. First, we show that the ERM of strongly-convex loss classes can be trivially scaled to big data using a naive parallelization algorithm with provable guarantees. Second, we propose a simple information criterion for model selection and demonstrate experimentally that it outperforms the popular Akaike's information criterion (AIC) and Schwarz's Bayesian information criterion (BIC).
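As a rough illustration of the parallelization claim (not the paper's algorithm verbatim): one standard "naive" scheme for strongly-convex ERM is one-shot averaging, where the data is split into shards, ERM is solved independently on each shard, and the per-shard minimizers are averaged. The sketch below assumes a ridge-regression objective; the shard count and penalty strength are illustrative choices.

```python
import numpy as np

# Synthetic linear-regression data (illustrative, not from the paper).
rng = np.random.default_rng(0)
n, d, shards = 10_000, 5, 8
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def erm_ridge(Xs, ys, lam=1e-3):
    """Exact minimizer of the strongly-convex ridge objective on one shard."""
    k = Xs.shape[1]
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(k), Xs.T @ ys)

# Solve each shard independently (embarrassingly parallel), then average.
w_avg = np.mean(
    [erm_ridge(Xc, yc)
     for Xc, yc in zip(np.array_split(X, shards), np.array_split(y, shards))],
    axis=0,
)

# Compare against the ERM solution computed on the full dataset.
w_full = erm_ridge(X, y)
print(np.linalg.norm(w_avg - w_full))
```

Because the per-shard objectives are strongly convex, each shard's minimizer concentrates around the population optimum, so the average stays close to the full-data ERM solution; this is the kind of behavior for which the paper claims provable guarantees.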