Towards Understanding the Generalization Bias of Two Layer Convolutional Linear Classifiers with Gradient Descent

Yifan Wu, Barnabas Poczos, Aarti Singh
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:1070-1078, 2019.

Abstract

A major challenge in understanding the generalization of deep learning is to explain why (stochastic) gradient descent can exploit the network architecture to find solutions that have good generalization performance when using high capacity models. We find simple but realistic examples showing that this phenomenon exists even when learning linear classifiers — between two linear networks with the same capacity, the one with a convolutional layer can generalize better than the other when the data distribution has some underlying spatial structure. We argue that this difference results from a combination of the convolution architecture, data distribution and gradient descent, all of which must be included in a meaningful analysis. We analyze the generalization performance as a function of data distribution and convolutional filter size, given gradient descent as the optimization algorithm, then interpret the results using concrete examples. Experimental results show that our analysis is able to explain what happens in our introduced examples.
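The abstract's central comparison can be made concrete with a small sketch (not the paper's exact construction; the dimensions, filter size, and circular-convolution choice here are illustrative assumptions): a two-layer convolutional linear classifier computes a convolution with a short filter followed by a linear output layer, and since both layers are linear, it represents exactly the same function class as a fully connected linear classifier — the two models differ only in how gradient descent moves through their parameterizations.

```python
import numpy as np

# Illustrative sketch: both models below compute a linear function of the
# input (same capacity), but the convolutional one shares a short filter
# across positions. Dimensions and filter size are hypothetical.
rng = np.random.default_rng(0)
d, k = 8, 3  # input dimension and convolutional filter size

def fc_score(x, w):
    """Fully connected linear classifier: score is <w, x>."""
    return w @ x

def conv_score(x, v, a):
    """Two-layer convolutional linear classifier: circular convolution of x
    with a length-k filter v, followed by a linear output layer a."""
    filt = np.zeros_like(x)
    filt[: len(v)] = v
    hidden = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(filt)))
    return a @ hidden

v, a = rng.normal(size=k), rng.normal(size=d)

# Because conv_score is linear in x, its effective weight vector can be read
# off by evaluating it on the standard basis vectors.
w_eff = np.array([conv_score(e, v, a) for e in np.eye(d)])

x = rng.normal(size=d)
assert np.isclose(conv_score(x, v, a), fc_score(x, w_eff))  # same function class
```

The assertion illustrates the paper's premise: any classifier expressible by the convolutional parameterization (v, a) is also expressible by some fully connected weight vector w, so any generalization gap between the two must come from the optimization dynamics and the data distribution rather than from capacity.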

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-wu19b,
  title     = {Towards Understanding the Generalization Bias of Two Layer Convolutional Linear Classifiers with Gradient Descent},
  author    = {Wu, Yifan and Poczos, Barnabas and Singh, Aarti},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {1070--1078},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/wu19b/wu19b.pdf},
  url       = {https://proceedings.mlr.press/v89/wu19b.html},
  abstract  = {A major challenge in understanding the generalization of deep learning is to explain why (stochastic) gradient descent can exploit the network architecture to find solutions that have good generalization performance when using high capacity models. We find simple but realistic examples showing that this phenomenon exists even when learning linear classifiers — between two linear networks with the same capacity, the one with a convolutional layer can generalize better than the other when the data distribution has some underlying spatial structure. We argue that this difference results from a combination of the convolution architecture, data distribution and gradient descent, all of which must be included in a meaningful analysis. We analyze the generalization performance as a function of data distribution and convolutional filter size, given gradient descent as the optimization algorithm, then interpret the results using concrete examples. Experimental results show that our analysis is able to explain what happens in our introduced examples.}
}
Endnote
%0 Conference Paper
%T Towards Understanding the Generalization Bias of Two Layer Convolutional Linear Classifiers with Gradient Descent
%A Yifan Wu
%A Barnabas Poczos
%A Aarti Singh
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-wu19b
%I PMLR
%P 1070--1078
%U https://proceedings.mlr.press/v89/wu19b.html
%V 89
%X A major challenge in understanding the generalization of deep learning is to explain why (stochastic) gradient descent can exploit the network architecture to find solutions that have good generalization performance when using high capacity models. We find simple but realistic examples showing that this phenomenon exists even when learning linear classifiers — between two linear networks with the same capacity, the one with a convolutional layer can generalize better than the other when the data distribution has some underlying spatial structure. We argue that this difference results from a combination of the convolution architecture, data distribution and gradient descent, all of which must be included in a meaningful analysis. We analyze the generalization performance as a function of data distribution and convolutional filter size, given gradient descent as the optimization algorithm, then interpret the results using concrete examples. Experimental results show that our analysis is able to explain what happens in our introduced examples.
APA
Wu, Y., Poczos, B. & Singh, A. (2019). Towards Understanding the Generalization Bias of Two Layer Convolutional Linear Classifiers with Gradient Descent. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:1070-1078. Available from https://proceedings.mlr.press/v89/wu19b.html.