Fast generalization error bound of deep learning from a kernel perspective

Taiji Suzuki
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1397-1406, 2018.

Abstract

We develop a new theoretical framework to analyze the generalization error of deep learning and derive a new fast learning rate for two representative algorithms: empirical risk minimization and Bayesian deep learning. Previous theoretical analyses of deep learning have revealed its high expressive power and universal approximation capability. Our viewpoint is to treat the ordinary finite-dimensional deep neural network as a finite approximation of an infinite-dimensional one. This infinite-dimensional formulation naturally defines a reproducing kernel Hilbert space (RKHS) corresponding to each layer, and the approximation error is evaluated via the degrees of freedom of the RKHS in each layer. We derive generalization error bounds for both empirical risk minimization and Bayesian deep learning, and show that a bias-variance trade-off appears in terms of the number of parameters of the finite-dimensional approximation. We show that the optimal width of the internal layers can be determined through the degrees of freedom, and we derive the optimal convergence rate, which is faster than the $O(1/\sqrt{n})$ rate shown in existing studies.
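The degrees-of-freedom quantity that governs the optimal layer width can be illustrated numerically. The sketch below computes the standard effective degrees of freedom of a kernel, $N(\lambda) = \mathrm{tr}\,K(K+\lambda I)^{-1}$, for a Gaussian Gram matrix; the kernel choice, sample size, and regularization values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def degrees_of_freedom(K, lam):
    """Effective degrees of freedom N(lambda) = tr(K (K + lambda I)^{-1}).

    A standard capacity measure of an RKHS given its Gram matrix K;
    it decreases as the regularization parameter lam grows.
    """
    n = K.shape[0]
    return np.trace(K @ np.linalg.inv(K + lam * np.eye(n)))

# Illustrative data and a Gaussian (RBF) Gram matrix -- assumed setup.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # pairwise squared distances
K = np.exp(-D2 / 2.0)

for lam in (1e-2, 1e-1, 1.0):
    print(f"lambda={lam:g}  N(lambda)={degrees_of_freedom(K, lam):.2f}")
```

Because the eigenvalues of $K(K+\lambda I)^{-1}$ are $\mu_i/(\mu_i+\lambda)$, the quantity is always between 0 and the sample size and shrinks monotonically in $\lambda$; in the paper's framework an analogous quantity per layer trades off approximation error against the number of parameters.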
