Understanding Generalization and Optimization Performance of Deep CNNs

Pan Zhou, Jiashi Feng
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5960-5969, 2018.

Abstract

This work aims to provide an understanding of the remarkable success of deep convolutional neural networks (CNNs) by theoretically analyzing their generalization performance and establishing optimization guarantees for gradient-descent-based training algorithms. Specifically, for a CNN model consisting of $l$ convolutional layers and one fully connected layer, we prove that its generalization error is bounded by $\mathcal{O}(\sqrt{\theta\widetilde{\varrho}/n})$, where $\theta$ denotes the degrees of freedom of the network parameters, $n$ is the number of training samples, and $\widetilde{\varrho}=\mathcal{O}(\log(\prod_{i=1}^{l}b_{i}(k_{i}-s_{i}+1)/p)+\log(b_{l+1}))$ encapsulates architecture parameters, including the kernel size $k_{i}$, stride $s_{i}$, pooling size $p$, and parameter magnitude $b_{i}$. To the best of our knowledge, this is the first generalization bound that depends only on $\mathcal{O}(\log(\prod_{i=1}^{l+1}b_{i}))$, which is tighter than existing bounds that all involve an exponential term of the form $\mathcal{O}(\prod_{i=1}^{l+1}b_{i})$. Moreover, we prove that for an arbitrary gradient descent algorithm, the approximate stationary point computed by minimizing the empirical risk is also an approximate stationary point of the population risk. This helps explain why gradient descent training algorithms usually perform sufficiently well in practice. Furthermore, we establish a one-to-one correspondence and convergence guarantees for the non-degenerate stationary points of the empirical and population risks. This implies that a computed local minimum of the empirical risk is also close to a local minimum of the population risk, ensuring that the optimized CNN model generalizes well to new data.
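To make the role of the architecture terms concrete, the following is a small, purely illustrative Python sketch (not part of the paper) that evaluates the factor $\widetilde{\varrho}$ and the resulting bound scale $\sqrt{\theta\widetilde{\varrho}/n}$ for a hypothetical three-layer CNN. All numerical values are assumed for illustration, the $\mathcal{O}(\cdot)$ constants are ignored, and reading the pooling size $p$ as dividing each factor of the product is one interpretation of the formula.

import math

# Hypothetical architecture and sample-size values (assumed, not from the paper):
k = [5, 5, 3]        # kernel sizes k_i of the l = 3 convolutional layers
s = [1, 1, 1]        # strides s_i
b = [1.0, 1.0, 1.0]  # magnitude bounds b_i of the convolutional-layer parameters
b_fc = 1.0           # magnitude bound b_{l+1} of the fully connected layer
p = 2                # pooling size
theta = 250_000      # total degrees of freedom theta of the parameters
n = 50_000           # number of training samples

# rho_tilde = log(prod_i b_i (k_i - s_i + 1) / p) + log(b_{l+1}),
# with the pooling size p applied to each factor of the product (one reading).
rho_tilde = math.log(math.prod(bi * (ki - si + 1) / p
                               for bi, ki, si in zip(b, k, s))) + math.log(b_fc)

# Generalization-error scale sqrt(theta * rho_tilde / n), up to the O(.) constant.
bound_scale = math.sqrt(theta * rho_tilde / n)
print(f"rho_tilde ~ {rho_tilde:.3f}, bound scale ~ {bound_scale:.3f}")

Under these assumed numbers, doubling the sample size $n$ shrinks the bound by a factor of $\sqrt{2}$, whereas the magnitude bounds $b_i$ enter only logarithmically through $\widetilde{\varrho}$, in contrast to the exponential $\mathcal{O}(\prod_{i=1}^{l+1}b_{i})$ dependence of prior bounds highlighted in the abstract.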

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-zhou18a,
  title     = {Understanding Generalization and Optimization Performance of Deep {CNN}s},
  author    = {Zhou, Pan and Feng, Jiashi},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5960--5969},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/zhou18a/zhou18a.pdf},
  url       = {https://proceedings.mlr.press/v80/zhou18a.html}
}
Endnote
%0 Conference Paper
%T Understanding Generalization and Optimization Performance of Deep CNNs
%A Pan Zhou
%A Jiashi Feng
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-zhou18a
%I PMLR
%P 5960--5969
%U https://proceedings.mlr.press/v80/zhou18a.html
%V 80
APA
Zhou, P. & Feng, J. (2018). Understanding Generalization and Optimization Performance of Deep CNNs. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5960-5969. Available from https://proceedings.mlr.press/v80/zhou18a.html.
