Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias

Yu Yang, Eric Gan, Gintare Karolina Dziugaite, Baharan Mirzasoleiman
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2953-2961, 2024.

Abstract

Neural networks trained with (stochastic) gradient descent have an inductive bias towards learning simpler solutions. This makes them highly prone to learning spurious correlations in the training data that may not hold at test time. In this work, we provide the first theoretical analysis of the effect of simplicity bias on learning spurious correlations. Notably, we show that examples with spurious features are provably separable based on the model's output early in training. We further illustrate that if spurious features have a small enough noise-to-signal ratio, the network's output on the majority of examples is almost exclusively determined by the spurious features, leading to poor worst-group test accuracy. Finally, we propose SPARE, which identifies spurious correlations early in training and utilizes importance sampling to alleviate their effect. Empirically, we demonstrate that SPARE outperforms state-of-the-art methods by up to 21.1% in worst-group accuracy, while being up to 12x faster. We also show the applicability of SPARE, as a highly effective yet lightweight method, for discovering spurious correlations.
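The abstract describes a two-step recipe: separate examples by the model's output early in training, then importance-sample so minority-group examples are not drowned out. The toy sketch below illustrates that idea in NumPy; the 1-D two-means split and all values are illustrative stand-ins, not the paper's actual SPARE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical early-training outputs for 10 examples of one class:
# a majority group whose output is driven by the spurious feature
# (high, tightly clustered) and a minority group without it.
outputs = np.concatenate([rng.normal(3.0, 0.2, size=8),   # majority group
                          rng.normal(0.5, 0.2, size=2)])  # minority group

# Separate the two groups with a simple 1-D two-means split,
# a stand-in for clustering the model's early outputs.
threshold = outputs.mean()
for _ in range(10):
    hi, lo = outputs[outputs > threshold], outputs[outputs <= threshold]
    threshold = (hi.mean() + lo.mean()) / 2
groups = (outputs > threshold).astype(int)

# Importance sampling: weight each example inversely to its group size,
# so minority-group examples are drawn more often during training.
sizes = np.bincount(groups)
weights = 1.0 / sizes[groups]
weights /= weights.sum()

# Total sampling mass is now balanced across the two groups.
print([round(weights[groups == g].sum(), 2) for g in (0, 1)])  # [0.5, 0.5]
```

The inverse-size weighting is the standard importance-sampling correction for group imbalance; with 8 majority and 2 minority examples, each minority example ends up with four times the sampling probability of a majority example.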

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-yang24c,
  title     = {Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias},
  author    = {Yang, Yu and Gan, Eric and Karolina Dziugaite, Gintare and Mirzasoleiman, Baharan},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2953--2961},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/yang24c/yang24c.pdf},
  url       = {https://proceedings.mlr.press/v238/yang24c.html},
  abstract  = {Neural networks trained with (stochastic) gradient descent have an inductive bias towards learning simpler solutions. This makes them highly prone to learning spurious correlations in the training data that may not hold at test time. In this work, we provide the first theoretical analysis of the effect of simplicity bias on learning spurious correlations. Notably, we show that examples with spurious features are provably separable based on the model's output early in training. We further illustrate that if spurious features have a small enough noise-to-signal ratio, the network's output on the majority of examples is almost exclusively determined by the spurious features, leading to poor worst-group test accuracy. Finally, we propose SPARE, which identifies spurious correlations early in training and utilizes importance sampling to alleviate their effect. Empirically, we demonstrate that SPARE outperforms state-of-the-art methods by up to 21.1% in worst-group accuracy, while being up to 12x faster. We also show the applicability of SPARE, as a highly effective yet lightweight method, for discovering spurious correlations.}
}
Endnote
%0 Conference Paper
%T Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias
%A Yu Yang
%A Eric Gan
%A Gintare Karolina Dziugaite
%A Baharan Mirzasoleiman
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-yang24c
%I PMLR
%P 2953--2961
%U https://proceedings.mlr.press/v238/yang24c.html
%V 238
%X Neural networks trained with (stochastic) gradient descent have an inductive bias towards learning simpler solutions. This makes them highly prone to learning spurious correlations in the training data that may not hold at test time. In this work, we provide the first theoretical analysis of the effect of simplicity bias on learning spurious correlations. Notably, we show that examples with spurious features are provably separable based on the model's output early in training. We further illustrate that if spurious features have a small enough noise-to-signal ratio, the network's output on the majority of examples is almost exclusively determined by the spurious features, leading to poor worst-group test accuracy. Finally, we propose SPARE, which identifies spurious correlations early in training and utilizes importance sampling to alleviate their effect. Empirically, we demonstrate that SPARE outperforms state-of-the-art methods by up to 21.1% in worst-group accuracy, while being up to 12x faster. We also show the applicability of SPARE, as a highly effective yet lightweight method, for discovering spurious correlations.
APA
Yang, Y., Gan, E., Karolina Dziugaite, G. & Mirzasoleiman, B. (2024). Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2953-2961. Available from https://proceedings.mlr.press/v238/yang24c.html.