The Effect of Natural Distribution Shift on Question Answering Models

John Miller, Karl Krauth, Benjamin Recht, Ludwig Schmidt
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6905-6916, 2020.

Abstract

We build four new test sets for the Stanford Question Answering Dataset (SQuAD) and evaluate the ability of question-answering systems to generalize to new data. Our first test set is from the original Wikipedia domain and measures the extent to which existing systems overfit the original test set. Despite several years of heavy test set re-use, we find no evidence of adaptive overfitting. The remaining three test sets are constructed from New York Times articles, Reddit posts, and Amazon product reviews and measure robustness to natural distribution shifts. Across a broad range of models, we observe average performance drops of 3.8, 14.0, and 17.4 F1 points, respectively. In contrast, a strong human baseline matches or exceeds the performance of SQuAD models on the original domain and exhibits little to no drop in new domains. Taken together, our results confirm the surprising resilience of the holdout method and emphasize the need to move towards evaluation metrics that incorporate robustness to natural distribution shifts.
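The performance drops quoted above are reported in terms of the SQuAD token-level F1 metric. The following is a minimal Python sketch of that metric, assuming whitespace tokenization after light normalization; the function names (normalize, f1_score) are illustrative, and the official SQuAD evaluation script additionally takes the maximum F1 over all gold answers for each question.

    # Sketch of SQuAD-style token-level F1 between a predicted and a gold answer span.
    import re
    import string
    from collections import Counter

    def normalize(text: str) -> str:
        """Lowercase, strip punctuation and articles, collapse whitespace."""
        text = text.lower()
        text = "".join(ch for ch in text if ch not in set(string.punctuation))
        text = re.sub(r"\b(a|an|the)\b", " ", text)
        return " ".join(text.split())

    def f1_score(prediction: str, gold: str) -> float:
        pred_tokens = normalize(prediction).split()
        gold_tokens = normalize(gold).split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        num_same = sum(common.values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    # Example: a partially correct span earns partial credit (F1 ≈ 0.67 here).
    print(f1_score("the Eiffel Tower", "Eiffel Tower in Paris"))

A dataset-level score averages this per-question F1 over the test set, which is how the 3.8, 14.0, and 17.4 point drops on the new test sets are computed.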

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-miller20a,
  title     = {The Effect of Natural Distribution Shift on Question Answering Models},
  author    = {Miller, John and Krauth, Karl and Recht, Benjamin and Schmidt, Ludwig},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6905--6916},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/miller20a/miller20a.pdf},
  url       = {https://proceedings.mlr.press/v119/miller20a.html}
}
Endnote
%0 Conference Paper
%T The Effect of Natural Distribution Shift on Question Answering Models
%A John Miller
%A Karl Krauth
%A Benjamin Recht
%A Ludwig Schmidt
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-miller20a
%I PMLR
%P 6905--6916
%U https://proceedings.mlr.press/v119/miller20a.html
%V 119
APA
Miller, J., Krauth, K., Recht, B. & Schmidt, L. (2020). The Effect of Natural Distribution Shift on Question Answering Models. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6905-6916. Available from https://proceedings.mlr.press/v119/miller20a.html.
