What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective

Rhea Chowers, Yair Weiss
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:6115-6139, 2023.

Abstract

It has previously been reported that the representation learned in the first layer of deep Convolutional Neural Networks (CNNs) is highly consistent across initializations and architectures. In this work, we quantify this consistency by treating the first layer as a filter bank and measuring its energy distribution. We find that this energy distribution is very different from that of the initial weights and is remarkably consistent across random initializations, datasets, and architectures, and even when the CNNs are trained with random labels. To explain this consistency, we derive an analytical formula for the energy profile of linear CNNs and show that this profile is mostly dictated by the second-order statistics of image patches in the training set, and that it approaches a whitening transformation as the number of training iterations goes to infinity. Finally, we show that this formula for linear CNNs also gives an excellent fit to the energy profiles learned by commonly used nonlinear CNNs such as ResNet and VGG, and that the first layer of these CNNs indeed performs approximate whitening of their inputs.
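As a rough illustration of the measurement described in the abstract (not the authors' exact protocol), the sketch below estimates the second-order statistics of image patches, builds a PCA basis from them, and computes the energy of a first-layer filter bank along that basis. The array names (patches, filters) and the choice of the patch principal components as the reference basis are assumptions made for illustration; the heuristic whitening check in the final comment is likewise an assumption, not the paper's formula.

import numpy as np

# Assumed inputs (hypothetical shapes):
#   patches: (N, d) array of flattened image patches from the training set
#   filters: (K, d) array of flattened first-layer filters, one filter per row
def energy_profile(patches, filters):
    # Second-order statistics of the image patches.
    patches = patches - patches.mean(axis=0, keepdims=True)
    cov = patches.T @ patches / len(patches)             # (d, d) patch covariance
    eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending

    # Energy of the filter bank along each principal direction of the patches.
    coeffs = filters @ eigvecs                           # (K, d) projections onto the PCA basis
    energy = (coeffs ** 2).sum(axis=0)                   # per-direction energy of the filter bank
    return eigvals, energy

# Under these assumptions, the filter bank acts as an approximate (ZCA-style)
# whitener when its energy along direction i scales roughly like 1 / eigvals[i],
# i.e. when energy * eigvals is approximately constant across directions.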

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-chowers23a,
  title     = {What do {CNN}s Learn in the First Layer and Why? {A} Linear Systems Perspective},
  author    = {Chowers, Rhea and Weiss, Yair},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {6115--6139},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/chowers23a/chowers23a.pdf},
  url       = {https://proceedings.mlr.press/v202/chowers23a.html},
  abstract  = {It has previously been reported that the representation that is learned in the first layer of deep Convolutional Neural Networks (CNNs) is highly consistent across initializations and architectures. In this work, we quantify this consistency by considering the first layer as a filter bank and measuring its energy distribution. We find that the energy distribution is very different from that of the initial weights and is remarkably consistent across random initializations, datasets, architectures and even when the CNNs are trained with random labels. In order to explain this consistency, we derive an analytical formula for the energy profile of linear CNNs and show that this profile is mostly dictated by the second order statistics of image patches in the training set and it will approach a whitening transformation when the number of iterations goes to infinity. Finally, we show that this formula for linear CNNs also gives an excellent fit for the energy profiles learned by commonly used nonlinear CNNs such as ResNet and VGG, and that the first layer of these CNNs indeed performs approximate whitening of their inputs.}
}
Endnote
%0 Conference Paper
%T What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective
%A Rhea Chowers
%A Yair Weiss
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-chowers23a
%I PMLR
%P 6115--6139
%U https://proceedings.mlr.press/v202/chowers23a.html
%V 202
%X It has previously been reported that the representation that is learned in the first layer of deep Convolutional Neural Networks (CNNs) is highly consistent across initializations and architectures. In this work, we quantify this consistency by considering the first layer as a filter bank and measuring its energy distribution. We find that the energy distribution is very different from that of the initial weights and is remarkably consistent across random initializations, datasets, architectures and even when the CNNs are trained with random labels. In order to explain this consistency, we derive an analytical formula for the energy profile of linear CNNs and show that this profile is mostly dictated by the second order statistics of image patches in the training set and it will approach a whitening transformation when the number of iterations goes to infinity. Finally, we show that this formula for linear CNNs also gives an excellent fit for the energy profiles learned by commonly used nonlinear CNNs such as ResNet and VGG, and that the first layer of these CNNs indeed performs approximate whitening of their inputs.
APA
Chowers, R. & Weiss, Y. (2023). What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:6115-6139. Available from https://proceedings.mlr.press/v202/chowers23a.html.