On the Spectral Bias of Neural Networks
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5301-5310, 2019.
Abstract
Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we highlight a learning bias of deep networks towards low frequency functions – i.e. functions that vary globally without local fluctuations – which manifests itself as a frequency-dependent learning speed. Intuitively, this property is in line with the observation that over-parameterized networks prioritize learning simple patterns that generalize across data samples. We also investigate the role of the shape of the data manifold by presenting empirical and theoretical evidence that, somewhat counterintuitively, learning higher frequencies gets easier with increasing manifold complexity.
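The frequency-dependent learning speed described above can be observed with a minimal NumPy sketch (not the paper's experimental setup; the architecture, target signal, and hyperparameters here are illustrative assumptions): a one-hidden-layer ReLU network is trained by full-batch gradient descent on a target composed of a low-frequency and a high-frequency sinusoid, and the Fourier amplitudes of the residual show the low-frequency component being fit first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a low-frequency (k=1) plus a high-frequency (k=10) sinusoid.
n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
y = np.sin(2 * np.pi * x) + np.sin(2 * np.pi * 10 * x)
X, Y = x[:, None], y[:, None]

# One-hidden-layer ReLU network; random biases spread the ReLU
# kinks across the input interval [0, 1].
width = 64
W1 = rng.normal(0.0, np.sqrt(2.0), (1, width))
b1 = rng.uniform(-1.0, 1.0, width)
W2 = rng.normal(0.0, 1.0 / np.sqrt(width), (width, 1))
b2 = np.zeros(1)

lr, steps = 0.01, 20000
for _ in range(steps):
    H = np.maximum(X @ W1 + b1, 0.0)   # hidden activations
    P = H @ W2 + b2                    # network output
    G = 2.0 * (P - Y) / n              # gradient of MSE w.r.t. P
    GH = (G @ W2.T) * (H > 0.0)        # backprop through the ReLU
    W2 -= lr * (H.T @ G)
    b2 -= lr * G.sum(axis=0)
    W1 -= lr * (X.T @ GH)
    b1 -= lr * GH.sum(axis=0)

# Fourier amplitudes of the residual: under spectral bias, the k=1
# component of the target is fit far better than the k=10 component.
P = np.maximum(X @ W1 + b1, 0.0) @ W2 + b2
amp = np.abs(np.fft.rfft((P - Y).ravel())) / n
print(f"residual amplitude at k=1:  {amp[1]:.4f}")
print(f"residual amplitude at k=10: {amp[10]:.4f}")
```

Both target modes start with the same residual amplitude (0.5 in this normalization), so the gap between `amp[1]` and `amp[10]` after training directly reflects the frequency-dependent learning speed.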