Contextual Reliability: When Different Features Matter in Different Contexts

Gaurav Rohit Ghosal, Amrith Setlur, Daniel S. Brown, Anca Dragan, Aditi Raghunathan
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:11300-11320, 2023.

Abstract

Deep neural networks often fail catastrophically by relying on spurious correlations. Most prior work assumes a clear dichotomy into spurious and reliable features; however, this is often unrealistic. For example, most of the time we do not want an autonomous car to simply copy the speed of surrounding cars—we don’t want our car to run a red light if a neighboring car does so. However, we cannot simply enforce invariance to next-lane speed, since it could provide valuable information about an unobservable pedestrian at a crosswalk. Thus, universally ignoring features that are sometimes (but not always) reliable can lead to non-robust performance. We formalize a new setting called contextual reliability which accounts for the fact that the "right" features to use may vary depending on the context. We propose and analyze a two-stage framework called Explicit Non-spurious feature Prediction (ENP) which first identifies the relevant features to use for a given context, then trains a model to rely exclusively on these features. Our work theoretically and empirically demonstrates the advantages of ENP over existing methods and provides new benchmarks for contextual reliability.
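
To make the two-stage structure concrete, the following is a minimal sketch of what an ENP-style pipeline could look like. It is an illustration under our own assumptions, not the authors' implementation: the module names, architectures, and synthetic data (including the per-context feature annotations used to supervise the first stage) are hypothetical.

import torch
import torch.nn as nn

# Stage 1: given a context, predict which features are reliable (a binary mask).
# Stage 2: train the task model on inputs with unreliable features zeroed out,
# so it can rely only on the features flagged as relevant for that context.

class FeaturePredictor(nn.Module):
    """Maps a context vector to per-feature relevance logits."""
    def __init__(self, context_dim, num_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, 64), nn.ReLU(),
            nn.Linear(64, num_features),
        )

    def forward(self, context):
        return self.net(context)  # logits; > 0 means "use this feature"

class MaskedClassifier(nn.Module):
    """Task model that only sees features selected by the predicted mask."""
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x, mask):
        return self.net(x * mask)  # zero out features deemed unreliable

# Toy data: features x, contexts c, labels y, and per-context feature
# annotations m supervising stage 1 (all synthetic, for illustration only).
n, d, cdim, k = 256, 8, 4, 2
x = torch.randn(n, d)
c = torch.randn(n, cdim)
y = torch.randint(0, k, (n,))
m = (torch.rand(n, d) > 0.5).float()

# Stage 1: learn to predict the relevant features from the context.
stage1 = FeaturePredictor(cdim, d)
opt1 = torch.optim.Adam(stage1.parameters(), lr=1e-2)
for _ in range(200):
    opt1.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(stage1(c), m)
    loss.backward()
    opt1.step()

# Stage 2: train the classifier on inputs masked by stage 1's predictions.
stage2 = MaskedClassifier(d, k)
opt2 = torch.optim.Adam(stage2.parameters(), lr=1e-2)
for _ in range(200):
    with torch.no_grad():
        mask = (stage1(c) > 0).float()
    opt2.zero_grad()
    loss = nn.functional.cross_entropy(stage2(x, mask), y)
    loss.backward()
    opt2.step()

The key design choice this sketch captures is that the task model never sees features the context model marks as unreliable, so it cannot latch onto them even in contexts where they happen to correlate with the label.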

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-ghosal23a,
  title     = {Contextual Reliability: When Different Features Matter in Different Contexts},
  author    = {Ghosal, Gaurav Rohit and Setlur, Amrith and Brown, Daniel S. and Dragan, Anca and Raghunathan, Aditi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {11300--11320},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/ghosal23a/ghosal23a.pdf},
  url       = {https://proceedings.mlr.press/v202/ghosal23a.html}
}
Endnote
%0 Conference Paper
%T Contextual Reliability: When Different Features Matter in Different Contexts
%A Gaurav Rohit Ghosal
%A Amrith Setlur
%A Daniel S. Brown
%A Anca Dragan
%A Aditi Raghunathan
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-ghosal23a
%I PMLR
%P 11300--11320
%U https://proceedings.mlr.press/v202/ghosal23a.html
%V 202
APA
Ghosal, G.R., Setlur, A., Brown, D.S., Dragan, A. & Raghunathan, A. (2023). Contextual Reliability: When Different Features Matter in Different Contexts. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:11300-11320. Available from https://proceedings.mlr.press/v202/ghosal23a.html.
