Revisiting Spatial Invariance with Low-Rank Local Connectivity

Gamaleldin Elsayed, Prajit Ramachandran, Jonathon Shlens, Simon Kornblith
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2868-2879, 2020.

Abstract

Convolutional neural networks are among the most successful architectures in deep learning, with this success at least partially attributable to the efficacy of spatial invariance as an inductive bias. Locally connected layers, which differ from convolutional layers only in their lack of spatial invariance, usually perform poorly in practice. However, these observations still leave open the possibility that some degree of relaxation of spatial invariance may yield a better inductive bias than either convolution or local connectivity. To test this hypothesis, we design a method to relax the spatial invariance of a network layer in a controlled manner; we create a low-rank locally connected layer, where the filter bank applied at each position is constructed as a linear combination of a basis set of filter banks with spatially varying combining weights. By varying the number of basis filter banks, we can control the degree of relaxation of spatial invariance. In experiments with small convolutional networks, we find that relaxing spatial invariance improves classification accuracy over both convolution and locally connected layers across the MNIST, CIFAR-10, and CelebA datasets, thus suggesting that spatial invariance may be an overly restrictive prior.
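The layer described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation under assumed conventions (stride 1, valid padding, raw combining weights), not the authors' code: each output position mixes the K basis filter banks with its own weight vector, then applies the resulting filter bank to the local input patch.

```python
import numpy as np

def low_rank_local_conv(x, bases, weights):
    """Low-rank locally connected layer (illustrative sketch).

    x:       input image, shape (H, W, C_in)
    bases:   K basis filter banks, shape (K, kh, kw, C_in, C_out)
    weights: per-position combining weights, shape (H_out, W_out, K)
    """
    K, kh, kw, c_in, c_out = bases.shape
    H, W, _ = x.shape
    H_out, W_out = H - kh + 1, W - kw + 1
    out = np.zeros((H_out, W_out, c_out))
    for i in range(H_out):
        for j in range(W_out):
            # Linearly combine the basis filter banks using this
            # position's weights -> a position-specific filter bank.
            filt = np.tensordot(weights[i, j], bases, axes=(0, 0))
            # Apply it to the local input patch.
            patch = x[i:i + kh, j:j + kw, :]
            out[i, j] = np.tensordot(patch, filt, axes=([0, 1, 2], [0, 1, 2]))
    return out
```

With K = 1 the combining weights are redundant and the layer reduces to an ordinary convolution; with one basis per position and one-hot weights it recovers a fully locally connected layer. Varying K between these extremes is what controls the degree of spatial invariance.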

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-elsayed20a,
  title     = {Revisiting Spatial Invariance with Low-Rank Local Connectivity},
  author    = {Elsayed, Gamaleldin and Ramachandran, Prajit and Shlens, Jonathon and Kornblith, Simon},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2868--2879},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/elsayed20a/elsayed20a.pdf},
  url       = {https://proceedings.mlr.press/v119/elsayed20a.html},
  abstract  = {Convolutional neural networks are among the most successful architectures in deep learning with this success at least partially attributable to the efficacy of spatial invariance as an inductive bias. Locally connected layers, which differ from convolutional layers only in their lack of spatial invariance, usually perform poorly in practice. However, these observations still leave open the possibility that some degree of relaxation of spatial invariance may yield a better inductive bias than either convolution or local connectivity. To test this hypothesis, we design a method to relax the spatial invariance of a network layer in a controlled manner; we create a \emph{low-rank} locally connected layer, where the filter bank applied at each position is constructed as a linear combination of basis set of filter banks with spatially varying combining weights. By varying the number of basis filter banks, we can control the degree of relaxation of spatial invariance. In experiments with small convolutional networks, we find that relaxing spatial invariance improves classification accuracy over both convolution and locally connected layers across MNIST, CIFAR-10, and CelebA datasets, thus suggesting that spatial invariance may be an overly restrictive prior.}
}
Endnote
%0 Conference Paper
%T Revisiting Spatial Invariance with Low-Rank Local Connectivity
%A Gamaleldin Elsayed
%A Prajit Ramachandran
%A Jonathon Shlens
%A Simon Kornblith
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-elsayed20a
%I PMLR
%P 2868--2879
%U https://proceedings.mlr.press/v119/elsayed20a.html
%V 119
%X Convolutional neural networks are among the most successful architectures in deep learning with this success at least partially attributable to the efficacy of spatial invariance as an inductive bias. Locally connected layers, which differ from convolutional layers only in their lack of spatial invariance, usually perform poorly in practice. However, these observations still leave open the possibility that some degree of relaxation of spatial invariance may yield a better inductive bias than either convolution or local connectivity. To test this hypothesis, we design a method to relax the spatial invariance of a network layer in a controlled manner; we create a \emph{low-rank} locally connected layer, where the filter bank applied at each position is constructed as a linear combination of basis set of filter banks with spatially varying combining weights. By varying the number of basis filter banks, we can control the degree of relaxation of spatial invariance. In experiments with small convolutional networks, we find that relaxing spatial invariance improves classification accuracy over both convolution and locally connected layers across MNIST, CIFAR-10, and CelebA datasets, thus suggesting that spatial invariance may be an overly restrictive prior.
APA
Elsayed, G., Ramachandran, P., Shlens, J. & Kornblith, S. (2020). Revisiting Spatial Invariance with Low-Rank Local Connectivity. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2868-2879. Available from https://proceedings.mlr.press/v119/elsayed20a.html.
