Approximation and non-parametric estimation of ResNet-type convolutional neural networks

Kenta Oono, Taiji Suzuki
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4922-4931, 2019.

Abstract

Convolutional neural networks (CNNs) have been shown to achieve optimal approximation and estimation error rates (in the minimax sense) in several function classes. However, previously analyzed optimal CNNs are unrealistically wide and difficult to obtain via optimization due to sparse constraints in important function classes, including the Hölder class. We show that a ResNet-type CNN can attain the minimax optimal error rates in these classes in more plausible situations: it can be dense, and its width, channel size, and filter size are constant with respect to the sample size. The key idea is that we can replicate the learning ability of fully connected neural networks (FNNs) with tailored CNNs, as long as the FNNs have block-sparse structures. Our theory is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into one achieved by CNNs. As an application, we derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes with the same strategy.
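To make the two architectural notions in the abstract concrete, below is a minimal, purely illustrative sketch: a one-hidden-layer "block-sparse" FNN whose hidden units are split into independent blocks (block-diagonal first-layer weights), and a ResNet-type 1-D CNN whose channel size C and filter size K stay fixed as depth grows. This is not the paper's actual construction or translation procedure; all function names, dimensions, and weight initializations (block_sparse_fnn, resnet_cnn, d, M, m, C, K, depth) are hypothetical and chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def block_sparse_fnn(x, blocks):
    """One-hidden-layer FNN whose hidden units form independent blocks
    (i.e., the first-layer weight matrix is block-diagonal)."""
    out = 0.0
    for W, b, v in blocks:            # per block: hidden weights, bias, output weights
        h = relu(W @ x + b)           # hidden units of this block only see x
        out += v @ h                  # scalar contribution of this block
    return out

def conv1d(x, w):
    """Length-preserving 1-D convolution; x: (C_in, L), w: (C_out, C_in, K)."""
    C_out, C_in, K = w.shape
    L = x.shape[1]
    xp = np.pad(x, ((0, 0), (K - 1, 0)))          # causal padding keeps length L
    out = np.zeros((C_out, L))
    for t in range(L):
        out[:, t] = np.tensordot(w, xp[:, t:t + K], axes=([1, 2], [0, 1]))
    return out

def resnet_cnn(x, lift, conv_blocks, head):
    """ResNet-type CNN: identity skip connection around each two-layer conv
    block, followed by a linear head on the flattened features."""
    z = conv1d(x, lift)                           # lift 1 input channel to C channels
    for w1, w2 in conv_blocks:
        z = z + conv1d(relu(conv1d(z, w1)), w2)   # residual block, channels stay at C
    return head @ z.reshape(-1)

# Toy dimensions (hypothetical, for illustration only).
d, M, m = 8, 4, 5           # input dim, number of FNN blocks, hidden units per block
C, K, depth = 3, 2, 4       # CNN channel size, filter size, number of residual blocks

x_vec = rng.normal(size=d)
fnn_blocks = [(rng.normal(size=(m, d)), rng.normal(size=m), rng.normal(size=m))
              for _ in range(M)]
print("block-sparse FNN output:", block_sparse_fnn(x_vec, fnn_blocks))

x_seq = x_vec.reshape(1, d)                       # one input channel of length d
lift = rng.normal(size=(C, 1, K)) * 0.1
cnn_blocks = [(rng.normal(size=(C, C, K)) * 0.1, rng.normal(size=(C, C, K)) * 0.1)
              for _ in range(depth)]
head = rng.normal(size=C * d)
print("ResNet-type CNN output:", resnet_cnn(x_seq, lift, cnn_blocks, head))
```

Note how depth is the only quantity that needs to grow in the CNN: the channel size C and filter size K are fixed constants, which mirrors the abstract's claim that these widths need not scale with the sample size.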

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-oono19a,
  title     = {Approximation and non-parametric estimation of {R}es{N}et-type convolutional neural networks},
  author    = {Oono, Kenta and Suzuki, Taiji},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4922--4931},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/oono19a/oono19a.pdf},
  url       = {https://proceedings.mlr.press/v97/oono19a.html},
  abstract  = {Convolutional neural networks (CNNs) have been shown to achieve optimal approximation and estimation error rates (in the minimax sense) in several function classes. However, previously analyzed optimal CNNs are unrealistically wide and difficult to obtain via optimization due to sparse constraints in important function classes, including the Hölder class. We show that a ResNet-type CNN can attain the minimax optimal error rates in these classes in more plausible situations: it can be dense, and its width, channel size, and filter size are constant with respect to the sample size. The key idea is that we can replicate the learning ability of fully connected neural networks (FNNs) with tailored CNNs, as long as the FNNs have block-sparse structures. Our theory is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into one achieved by CNNs. As an application, we derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes with the same strategy.}
}
Endnote
%0 Conference Paper
%T Approximation and non-parametric estimation of ResNet-type convolutional neural networks
%A Kenta Oono
%A Taiji Suzuki
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-oono19a
%I PMLR
%P 4922--4931
%U https://proceedings.mlr.press/v97/oono19a.html
%V 97
%X Convolutional neural networks (CNNs) have been shown to achieve optimal approximation and estimation error rates (in the minimax sense) in several function classes. However, previously analyzed optimal CNNs are unrealistically wide and difficult to obtain via optimization due to sparse constraints in important function classes, including the Hölder class. We show that a ResNet-type CNN can attain the minimax optimal error rates in these classes in more plausible situations: it can be dense, and its width, channel size, and filter size are constant with respect to the sample size. The key idea is that we can replicate the learning ability of fully connected neural networks (FNNs) with tailored CNNs, as long as the FNNs have block-sparse structures. Our theory is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into one achieved by CNNs. As an application, we derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes with the same strategy.
APA
Oono, K. & Suzuki, T. (2019). Approximation and non-parametric estimation of ResNet-type convolutional neural networks. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4922-4931. Available from https://proceedings.mlr.press/v97/oono19a.html.
