Why does Throwing Away Data Improve Worst-Group Error?

Kamalika Chaudhuri, Kartik Ahuja, Martin Arjovsky, David Lopez-Paz
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:4144-4188, 2023.

Abstract

When facing data with imbalanced classes or groups, practitioners follow an intriguing strategy to achieve the best results. They throw away examples until the classes or groups are balanced in size, and then perform empirical risk minimization on the reduced training set. This opposes common wisdom in learning theory, where the expected error is supposed to decrease as the dataset grows in size. In this work, we leverage extreme value theory to address this apparent contradiction. Our results show that the tails of the data distribution play an important role in determining the worst-group accuracy of linear classifiers. When learning on data with heavy tails, throwing away data restores the geometric symmetry of the resulting classifier, and therefore improves its worst-group generalization.
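
A minimal sketch of the subsampling-then-ERM strategy the abstract describes, in Python. The synthetic data, the group labels, and the choice of logistic regression as the linear classifier are illustrative assumptions, not the paper's experimental setup; per-group errors are measured on the training data for brevity, whereas in practice they would be estimated on held-out data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def subsample_to_balance(X, y, groups, seed=0):
    """Throw away examples uniformly at random until every group
    is as large as the smallest group (the strategy in the abstract)."""
    rng = np.random.default_rng(seed)
    n_min = min(int(np.sum(groups == g)) for g in np.unique(groups))
    keep = np.concatenate([
        rng.choice(np.flatnonzero(groups == g), size=n_min, replace=False)
        for g in np.unique(groups)
    ])
    return X[keep], y[keep]

# Illustrative imbalanced data: a large majority group and a small
# minority group, each with its own label (classes coincide with groups).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 1.0, size=(1000, 2)),
               rng.normal(-1.0, 1.0, size=(50, 2))])
y = np.array([1] * 1000 + [0] * 50)
groups = np.array([0] * 1000 + [1] * 50)

# Empirical risk minimization (here, logistic regression) on the
# balanced, reduced training set.
X_bal, y_bal = subsample_to_balance(X, y, groups)
clf = LogisticRegression().fit(X_bal, y_bal)

# Worst-group error: the largest per-group error. (Computed on the
# training data for brevity; use held-out data in practice.)
worst = max(1.0 - clf.score(X[groups == g], y[groups == g])
            for g in np.unique(groups))
print(f"worst-group error: {worst:.3f}")
```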

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-chaudhuri23a,
  title     = {Why does Throwing Away Data Improve Worst-Group Error?},
  author    = {Chaudhuri, Kamalika and Ahuja, Kartik and Arjovsky, Martin and Lopez-Paz, David},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {4144--4188},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/chaudhuri23a/chaudhuri23a.pdf},
  url       = {https://proceedings.mlr.press/v202/chaudhuri23a.html},
  abstract  = {When facing data with imbalanced classes or groups, practitioners follow an intriguing strategy to achieve best results. They throw away examples until the classes or groups are balanced in size, and then perform empirical risk minimization on the reduced training set. This opposes common wisdom in learning theory, where the expected error is supposed to decrease as the dataset grows in size. In this work, we leverage extreme value theory to address this apparent contradiction. Our results show that the tails of the data distribution play an important role in determining the worst-group-accuracy of linear classifiers. When learning on data with heavy tails, throwing away data restores the geometric symmetry of the resulting classifier, and therefore improves its worst-group generalization.}
}
Endnote
%0 Conference Paper
%T Why does Throwing Away Data Improve Worst-Group Error?
%A Kamalika Chaudhuri
%A Kartik Ahuja
%A Martin Arjovsky
%A David Lopez-Paz
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-chaudhuri23a
%I PMLR
%P 4144--4188
%U https://proceedings.mlr.press/v202/chaudhuri23a.html
%V 202
%X When facing data with imbalanced classes or groups, practitioners follow an intriguing strategy to achieve best results. They throw away examples until the classes or groups are balanced in size, and then perform empirical risk minimization on the reduced training set. This opposes common wisdom in learning theory, where the expected error is supposed to decrease as the dataset grows in size. In this work, we leverage extreme value theory to address this apparent contradiction. Our results show that the tails of the data distribution play an important role in determining the worst-group-accuracy of linear classifiers. When learning on data with heavy tails, throwing away data restores the geometric symmetry of the resulting classifier, and therefore improves its worst-group generalization.
APA
Chaudhuri, K., Ahuja, K., Arjovsky, M. & Lopez-Paz, D. (2023). Why does Throwing Away Data Improve Worst-Group Error? Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:4144-4188. Available from https://proceedings.mlr.press/v202/chaudhuri23a.html.