Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data

Nikita Tsoy, Nikola Konstantinov
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:48728-48767, 2024.

Abstract

Simplicity bias, the propensity of deep models to over-rely on simple features, has been identified as a potential reason for limited out-of-distribution generalization of neural networks (Shah et al., 2020). Despite the important implications, this phenomenon has been theoretically confirmed and characterized only under strong dataset assumptions, such as linear separability (Lyu et al., 2021). In this work, we characterize simplicity bias for general datasets in the context of two-layer neural networks initialized with small weights and trained with gradient flow. Specifically, we prove that in the early training phases, network features cluster around a few directions that do not depend on the size of the hidden layer. Furthermore, for datasets with an XOR-like pattern, we precisely identify the learned features and demonstrate that simplicity bias intensifies during later training stages. These results indicate that features learned in the middle stages of training may be more useful for OOD transfer. We support this hypothesis with experiments on image data.
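To make the training regime concrete, below is a minimal, hypothetical sketch of the setting the abstract describes: a two-layer ReLU network with small random initialization, trained by full-batch gradient descent (a small-step discretization of gradient flow) on a 2-D XOR-like dataset, after which we inspect the directions of the hidden-layer weights. This is an illustration under assumed hyperparameters (width, initialization scale, learning rate, step count), not the authors' experimental code.

```python
# Illustrative sketch (not the authors' code): two-layer ReLU network,
# small initialization, full-batch gradient descent as a discretization
# of gradient flow, on a 2-D XOR-like dataset. All hyperparameters are
# assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

# XOR-like data: four points at (+-1, +-1) with label sign(x1 * x2).
X = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0, -1.0, 1.0])

m, d = 512, 2                                  # hidden width, input dim
init_scale = 1e-3                              # "small weights" regime
W = init_scale * rng.standard_normal((m, d))   # first-layer weights
a = init_scale * rng.standard_normal(m)        # second-layer weights
lr, steps = 0.5, 20_000                        # small-step GD ~ gradient flow

def sigmoid(t):
    """Numerically stable logistic function."""
    out = np.empty_like(t)
    pos = t >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-t[pos]))
    e = np.exp(t[~pos])
    out[~pos] = e / (1.0 + e)
    return out

for _ in range(steps):
    z = X @ W.T                          # pre-activations, shape (n, m)
    h = np.maximum(z, 0.0)               # ReLU features
    f = h @ a                            # network outputs, shape (n,)
    g = -y * sigmoid(-y * f) / len(y)    # d(mean logistic loss)/df
    grad_a = h.T @ g                     # backprop into second layer
    grad_W = (np.outer(g, a) * (z > 0)).T @ X   # and into first layer
    a -= lr * grad_a
    W -= lr * grad_W

# Inspect the directions w_j / ||w_j|| of the neurons that actually grew.
norms = np.linalg.norm(W, axis=1)
grew = norms > 10 * init_scale
dirs = W[grew] / norms[grew, None]
angles = np.degrees(np.arctan2(dirs[:, 1], dirs[:, 0]))
print(np.round(np.sort(angles), 1))      # expect a few tight clusters
```

In typical runs of this sketch, the printed angles concentrate around a handful of values (for this dataset, near the diagonal directions), and the set of clusters does not change as the width m varies, consistent with the width-independent feature clustering the abstract describes.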

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-tsoy24a,
  title     = {Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data},
  author    = {Tsoy, Nikita and Konstantinov, Nikola},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {48728--48767},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/tsoy24a/tsoy24a.pdf},
  url       = {https://proceedings.mlr.press/v235/tsoy24a.html}
}
Endnote
%0 Conference Paper
%T Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data
%A Nikita Tsoy
%A Nikola Konstantinov
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-tsoy24a
%I PMLR
%P 48728--48767
%U https://proceedings.mlr.press/v235/tsoy24a.html
%V 235
APA
Tsoy, N. & Konstantinov, N. (2024). Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:48728-48767. Available from https://proceedings.mlr.press/v235/tsoy24a.html.
