Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise

Haotian Ye, James Zou, Linjun Zhang
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:8968-8990, 2023.

Abstract

Spurious correlations in the training environment, such as image backgrounds, can cause empirical risk minimization (ERM) to perform poorly in the test environment. Addressing this problem, Kirichenko et al. (2022) empirically observed that the core features related to the outcome can still be learned well even in the presence of spurious correlations. This suggests a promising strategy: first train a feature learner rather than a classifier, then perform linear probing (last-layer retraining) in the test environment. However, a theoretical understanding of when and why this approach works is lacking. In this paper, we find that core features are only learned well when their associated non-realizable noise is smaller than that of spurious features, which is not necessarily true in practice. We provide both theory and experiments to support this finding and to illustrate the importance of non-realizable noise. Moreover, we propose an algorithm, Freeze then Train (FTT), that first freezes certain salient features and then trains the remaining features using ERM. We theoretically show that FTT preserves features that are more beneficial to test-time probing. Across two commonly used spurious-correlation datasets, FTT outperforms ERM, IRM, JTT, and CVaR-DRO, with a substantial accuracy improvement (4.5%) when the feature noise is large. FTT also performs better on general distribution-shift benchmarks.
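As a concrete illustration of the two-stage recipe the abstract describes, below is a minimal PyTorch-style sketch. The names (FTTFeaturizer, erm_train, linear_probe), the two-branch split of frozen versus trainable features, and all hyperparameters are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn

class FTTFeaturizer(nn.Module):
    """Concatenate a frozen feature branch with a trainable one.

    Stage 1 of the sketch: the frozen branch holds the "salient"
    features fixed so that subsequent ERM training cannot overwrite them.
    (Illustrative assumption: both branches output flat feature vectors.)
    """
    def __init__(self, frozen_branch: nn.Module, trainable_branch: nn.Module):
        super().__init__()
        self.frozen_branch = frozen_branch
        self.trainable_branch = trainable_branch
        for p in self.frozen_branch.parameters():
            p.requires_grad = False  # freeze the salient features

    def forward(self, x):
        with torch.no_grad():
            z_frozen = self.frozen_branch(x)
        z_train = self.trainable_branch(x)
        return torch.cat([z_frozen, z_train], dim=-1)

def erm_train(featurizer, head, train_loader, epochs=10, lr=1e-3):
    """Stage 2 of the sketch: plain ERM on the training environment.

    Only parameters with requires_grad=True are optimized, so the
    frozen branch is left untouched.
    """
    params = [p for p in list(featurizer.parameters()) + list(head.parameters())
              if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss = loss_fn(head(featurizer(x)), y)
            loss.backward()
            opt.step()

def linear_probe(featurizer, probe_loader, feat_dim, num_classes,
                 epochs=10, lr=1e-2):
    """Test-time probing: retrain only a fresh linear head (last-layer
    retraining) on data from the test environment, keeping all learned
    features fixed."""
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    featurizer.eval()
    for _ in range(epochs):
        for x, y in probe_loader:
            with torch.no_grad():
                z = featurizer(x)
            opt.zero_grad()
            loss = loss_fn(head(z), y)
            loss.backward()
            opt.step()
    return head

The design choice the sketch highlights: freezing via requires_grad=False and concatenating the two branches guarantees ERM can only adapt the remaining features, while the final linear probe on test-environment data mirrors the last-layer retraining step that the paper analyzes.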

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-ye23a,
  title     = {Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise},
  author    = {Ye, Haotian and Zou, James and Zhang, Linjun},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {8968--8990},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/ye23a/ye23a.pdf},
  url       = {https://proceedings.mlr.press/v206/ye23a.html}
}
Endnote
%0 Conference Paper
%T Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise
%A Haotian Ye
%A James Zou
%A Linjun Zhang
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-ye23a
%I PMLR
%P 8968--8990
%U https://proceedings.mlr.press/v206/ye23a.html
%V 206
APA
Ye, H., Zou, J. & Zhang, L. (2023). Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:8968-8990. Available from https://proceedings.mlr.press/v206/ye23a.html.