Identifying Statistical Bias in Dataset Replication

Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2922-2932, 2020.

Abstract

Dataset replication is a useful tool for assessing whether improvements in test accuracy on a specific benchmark correspond to improvements in models’ ability to generalize reliably. In this work, we present unintuitive yet significant ways in which standard approaches to dataset replication introduce statistical bias, skewing the resulting observations. We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for selection frequency, a human-in-the-loop measure of data quality. We show that after remeasuring selection frequencies and correcting for statistical bias, only an estimated 3.6% of the original 11.7% accuracy drop remains unaccounted for. We conclude with concrete recommendations for recognizing and avoiding bias in dataset replication. Code for our study is publicly available: https://git.io/data-rep-analysis.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-engstrom20a,
  title     = {Identifying Statistical Bias in Dataset Replication},
  author    = {Engstrom, Logan and Ilyas, Andrew and Santurkar, Shibani and Tsipras, Dimitris and Steinhardt, Jacob and Madry, Aleksander},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2922--2932},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/engstrom20a/engstrom20a.pdf},
  url       = {https://proceedings.mlr.press/v119/engstrom20a.html},
  abstract  = {Dataset replication is a useful tool for assessing whether improvements in test accuracy on a specific benchmark correspond to improvements in models’ ability to generalize reliably. In this work, we present unintuitive yet significant ways in which standard approaches to dataset replication introduce statistical bias, skewing the resulting observations. We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for selection frequency, a human-in-the-loop measure of data quality. We show that after remeasuring selection frequencies and correcting for statistical bias, only an estimated 3.6% of the original 11.7% accuracy drop remains unaccounted for. We conclude with concrete recommendations for recognizing and avoiding bias in dataset replication. Code for our study is publicly available: https://git.io/data-rep-analysis.}
}
Endnote
%0 Conference Paper
%T Identifying Statistical Bias in Dataset Replication
%A Logan Engstrom
%A Andrew Ilyas
%A Shibani Santurkar
%A Dimitris Tsipras
%A Jacob Steinhardt
%A Aleksander Madry
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-engstrom20a
%I PMLR
%P 2922--2932
%U https://proceedings.mlr.press/v119/engstrom20a.html
%V 119
%X Dataset replication is a useful tool for assessing whether improvements in test accuracy on a specific benchmark correspond to improvements in models’ ability to generalize reliably. In this work, we present unintuitive yet significant ways in which standard approaches to dataset replication introduce statistical bias, skewing the resulting observations. We study ImageNet-v2, a replication of the ImageNet dataset on which models exhibit a significant (11-14%) drop in accuracy, even after controlling for selection frequency, a human-in-the-loop measure of data quality. We show that after remeasuring selection frequencies and correcting for statistical bias, only an estimated 3.6% of the original 11.7% accuracy drop remains unaccounted for. We conclude with concrete recommendations for recognizing and avoiding bias in dataset replication. Code for our study is publicly available: https://git.io/data-rep-analysis.
APA
Engstrom, L., Ilyas, A., Santurkar, S., Tsipras, D., Steinhardt, J., & Madry, A. (2020). Identifying Statistical Bias in Dataset Replication. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 119:2922-2932. Available from https://proceedings.mlr.press/v119/engstrom20a.html.