Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks

Lechao Xiao
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:4888-4944, 2022.

Abstract

Understanding the fundamental principles behind the massive success of neural networks is one of the most important open questions in deep learning. However, due to the highly complex nature of the problem, progress has been relatively slow. In this note, through the lens of infinite-width networks, a.k.a. neural kernels, we present one such principle resulting from hierarchical localities. It is well-known that the eigenstructure of infinite-width multilayer perceptrons (MLPs) depends solely on the concept of frequency, which measures the order of interactions. We show that the topologies from deep convolutional networks (CNNs) restructure the associated eigenspaces into finer subspaces. In addition to frequency, the new structure also depends on the concept of space, which measures the spatial distance among nonlinear interaction terms. The resulting fine-grained eigenstructure dramatically improves the networks' learnability, empowering them to simultaneously model a much richer class of interactions, including Long-Range-Low-Frequency interactions, Short-Range-High-Frequency interactions, and various interpolations and extrapolations in-between. Additionally, model scaling can improve the resolutions of interpolations and extrapolations and, therefore, the networks' learnability. Finally, we prove a sharp characterization of the generalization error for infinite-width CNNs (a.k.a. C-NTK and CNN-GP) of any depth in the high-dimensional setting. Two corollaries follow: (1) infinite-width deep CNNs can overcome the curse of dimensionality without losing their expressivity, and (2) scaling improves performance in both the finite and infinite data regimes.
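
To make the contrast concrete, the following is an illustrative schematic under a standard simplifying assumption (inputs distributed uniformly on a sphere, or carrying a patch structure in the convolutional case), not necessarily the paper's exact setting. The infinite-width MLP kernel is zonal, so its Mercer decomposition uses spherical harmonics Y_{k,m}, and the eigenvalue depends only on the frequency (degree) k:

    K_{\mathrm{MLP}}(x, x') = \sum_{k \ge 0} \lambda_k \sum_{m=1}^{N(d,k)} Y_{k,m}(x)\, Y_{k,m}(x'), \qquad \lambda_{k,m} = \lambda_k .

For a deep CNN kernel the eigenspaces split further: schematically, an eigenfunction \Psi_{k,S,m} carries both a frequency index k and a spatial index S recording which patches the interacting coordinates occupy, and the eigenvalue depends on both:

    K_{\mathrm{CNN}}(x, x') = \sum_{k, S} \lambda_{k, S} \sum_{m} \Psi_{k,S,m}(x)\, \Psi_{k,S,m}(x') .

Because \lambda_{k,S} depends on space as well as frequency, long-range low-frequency terms and short-range high-frequency terms can both be assigned large eigenvalues, which is the sense in which, per the abstract, such kernels can model both kinds of interactions simultaneously.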

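The infinite-width CNN kernels named in the abstract (CNN-GP and C-NTK) are the closed-form Gaussian-process and neural-tangent kernels of a convolutional architecture in the infinite-width limit. As a minimal sketch, assuming the open-source Neural Tangents library (not referenced on this page), such kernels can be computed as follows; the architecture and input shapes are placeholders:

    from jax import random
    from neural_tangents import stax

    # A small convolutional tower with a dense readout. In the infinite-width
    # limit, kernel_fn returns the corresponding CNN-GP (nngp) and C-NTK (ntk).
    init_fn, apply_fn, kernel_fn = stax.serial(
        stax.Conv(256, (3, 3), padding='SAME'), stax.Relu(),
        stax.Conv(256, (3, 3), padding='SAME'), stax.Relu(),
        stax.Flatten(), stax.Dense(1))

    key1, key2 = random.split(random.PRNGKey(0))
    x1 = random.normal(key1, (4, 8, 8, 3))  # 4 images, 8x8 pixels, 3 channels (NHWC)
    x2 = random.normal(key2, (2, 8, 8, 3))  # 2 images

    kernels = kernel_fn(x1, x2, ('nngp', 'ntk'))
    print(kernels.nngp.shape, kernels.ntk.shape)  # both (4, 2)

The paper's results concern the eigenstructure of such kernels in the high-dimensional setting, not their pointwise formulas.
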
Cite this Paper


BibTeX
@InProceedings{pmlr-v178-xiao22a,
  title     = {Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks},
  author    = {Xiao, Lechao},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {4888--4944},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/xiao22a/xiao22a.pdf},
  url       = {https://proceedings.mlr.press/v178/xiao22a.html},
  abstract  = {Understanding the fundamental principles behind the massive success of neural networks is one of the most important open questions in deep learning. However, due to the highly complex nature of the problem, progress has been relatively slow. In this note, through the lens of infinite-width networks, a.k.a. neural kernels, we present one such principle resulting from hierarchical localities. It is well-known that the eigenstructure of infinite-width multilayer perceptrons (MLPs) depends solely on the concept {\it frequency}, which measures the order of interactions. We show that the topologies from deep convolutional networks (CNNs) restructure the associated eigenspaces into finer subspaces. In addition to frequency, the new structure also depends on the concept {\it space}, which measures the spatial distance among nonlinear interaction terms. The resulting fine-grained eigenstructure dramatically improves the network’s learnability, empowering them to simultaneously model a much richer class of interactions, including Long-Range-Low-Frequency interactions, Short-Range-High-Frequency interactions, and various interpolations and extrapolations in-between. Additionally, model scaling can improve the resolutions of interpolations and extrapolations and, therefore, the network’s learnability. Finally, we prove a sharp characterization of the generalization error for infinite-width CNNs (aka C-NTK and CNN-GP) of any depth in the high-dimensional setting. Two corollaries follow: (1) infinite-width deep CNNs can overcome the curse of dimensionality without losing their expressivity, and (2) scaling improves performance in both the finite and infinite data regimes.}
}
Endnote
%0 Conference Paper
%T Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks
%A Lechao Xiao
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-xiao22a
%I PMLR
%P 4888--4944
%U https://proceedings.mlr.press/v178/xiao22a.html
%V 178
%X Understanding the fundamental principles behind the massive success of neural networks is one of the most important open questions in deep learning. However, due to the highly complex nature of the problem, progress has been relatively slow. In this note, through the lens of infinite-width networks, a.k.a. neural kernels, we present one such principle resulting from hierarchical localities. It is well-known that the eigenstructure of infinite-width multilayer perceptrons (MLPs) depends solely on the concept {\it frequency}, which measures the order of interactions. We show that the topologies from deep convolutional networks (CNNs) restructure the associated eigenspaces into finer subspaces. In addition to frequency, the new structure also depends on the concept {\it space}, which measures the spatial distance among nonlinear interaction terms. The resulting fine-grained eigenstructure dramatically improves the network’s learnability, empowering them to simultaneously model a much richer class of interactions, including Long-Range-Low-Frequency interactions, Short-Range-High-Frequency interactions, and various interpolations and extrapolations in-between. Additionally, model scaling can improve the resolutions of interpolations and extrapolations and, therefore, the network’s learnability. Finally, we prove a sharp characterization of the generalization error for infinite-width CNNs (aka C-NTK and CNN-GP) of any depth in the high-dimensional setting. Two corollaries follow: (1) infinite-width deep CNNs can overcome the curse of dimensionality without losing their expressivity, and (2) scaling improves performance in both the finite and infinite data regimes.
APA
Xiao, L. (2022). Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:4888-4944. Available from https://proceedings.mlr.press/v178/xiao22a.html.