Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning

Yuxiao Wen, Arthur Jacot
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:52779-52800, 2024.

Abstract

We describe the emergence of a Convolution Bottleneck (CBN) structure in CNNs, where the network uses its first few layers to transform the input representation into a representation that is supported only along a few frequencies and channels, before using the last few layers to map back to the outputs. We define the CBN rank, which describes the number and type of frequencies that are kept inside the bottleneck, and partially prove that the parameter norm required to represent a function $f$ scales as depth times the CBN rank of $f$. We also show that the parameter norm depends at the next order on the regularity of $f$. We show that any network with almost optimal parameter norm will exhibit a CBN structure in both the weights and, under the assumption that the network is stable under large learning rates, the activations, which motivates the common practice of down-sampling; we verify that the CBN results still hold with down-sampling. Finally, we use the CBN structure to interpret the functions learned by CNNs on a number of tasks.
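The bottleneck structure described above concerns which spatial frequencies survive in the hidden representations of a CNN. As a purely illustrative aid (not the authors' code or method), the sketch below shows one way such frequency support could be probed empirically: pass inputs through a toy 1D CNN with circular convolutions and count how many spatial frequencies carry non-negligible energy in each layer's activations. The architecture, energy threshold, and helper names are assumptions made for this example only.

# Hypothetical sketch: probe which spatial frequencies each layer's
# activations are supported on, via a DFT along the spatial axis.
import torch
import torch.nn as nn

class ToyCNN(nn.Module):
    def __init__(self, channels=16, depth=6, kernel=3):
        super().__init__()
        layers, in_ch = [], 1
        for _ in range(depth):
            # circular padding so the convolution diagonalizes in Fourier space
            layers.append(nn.Conv1d(in_ch, channels, kernel,
                                    padding=kernel // 2, padding_mode="circular"))
            layers.append(nn.ReLU())
            in_ch = channels
        self.blocks = nn.ModuleList(layers)

    def forward(self, x):
        acts = []
        for layer in self.blocks:
            x = layer(x)
            if isinstance(layer, nn.ReLU):
                acts.append(x)          # record post-activation representations
        return x, acts

def frequency_energy(act):
    """Energy per spatial frequency, summed over batch and channels."""
    spec = torch.fft.rfft(act, dim=-1)        # DFT along the spatial dimension
    return (spec.abs() ** 2).sum(dim=(0, 1))  # shape: (num_frequencies,)

x = torch.randn(32, 1, 64)                    # toy batch of 1D signals
_, activations = ToyCNN()(x)
for i, a in enumerate(activations):
    e = frequency_energy(a)
    kept = (e > 1e-3 * e.sum()).sum().item()  # crude count of "active" frequencies
    print(f"layer {i}: {kept} frequencies above threshold")

In a trained network exhibiting a CBN structure, one would expect such a count to drop sharply in the middle layers and rise again near the output; with random weights, as here, no such pattern is expected.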

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-wen24d,
  title     = {Which Frequencies do {CNN}s Need? {E}mergent Bottleneck Structure in Feature Learning},
  author    = {Wen, Yuxiao and Jacot, Arthur},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {52779--52800},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/wen24d/wen24d.pdf},
  url       = {https://proceedings.mlr.press/v235/wen24d.html},
  abstract  = {We describe the emergence of a Convolution Bottleneck (CBN) structure in CNNs, where the network uses its first few layers to transform the input representation into a representation that is supported only along a few frequencies and channels, before using the last few layers to map back to the outputs. We define the CBN rank, which describes the number and type of frequencies that are kept inside the bottleneck, and partially prove that the parameter norm required to represent a function $f$ scales as depth times the CBN rank $f$. We also show that the parameter norm depends at next order on the regularity of $f$. We show that any network with almost optimal parameter norm will exhibit a CBN structure in both the weights and - under the assumption that the network is stable under large learning rate - the activations, which motivates the common practice of down-sampling; and we verify that the CBN results still hold with down-sampling. Finally we use the CBN structure to interpret the functions learned by CNNs on a number of tasks.}
}
Endnote
%0 Conference Paper
%T Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning
%A Yuxiao Wen
%A Arthur Jacot
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-wen24d
%I PMLR
%P 52779--52800
%U https://proceedings.mlr.press/v235/wen24d.html
%V 235
%X We describe the emergence of a Convolution Bottleneck (CBN) structure in CNNs, where the network uses its first few layers to transform the input representation into a representation that is supported only along a few frequencies and channels, before using the last few layers to map back to the outputs. We define the CBN rank, which describes the number and type of frequencies that are kept inside the bottleneck, and partially prove that the parameter norm required to represent a function $f$ scales as depth times the CBN rank $f$. We also show that the parameter norm depends at next order on the regularity of $f$. We show that any network with almost optimal parameter norm will exhibit a CBN structure in both the weights and - under the assumption that the network is stable under large learning rate - the activations, which motivates the common practice of down-sampling; and we verify that the CBN results still hold with down-sampling. Finally we use the CBN structure to interpret the functions learned by CNNs on a number of tasks.
APA
Wen, Y. & Jacot, A. (2024). Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:52779-52800. Available from https://proceedings.mlr.press/v235/wen24d.html.
