But Are You Sure? An Uncertainty-Aware Perspective on Explainable AI

Charles Marx, Youngsuk Park, Hilaf Hasson, Yuyang Wang, Stefano Ermon, Luke Huan
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:7375-7391, 2023.

Abstract

Although black-box models can accurately predict outcomes such as weather patterns, they often lack transparency, making it challenging to extract meaningful insights (such as which atmospheric conditions signal future rainfall). Model explanations attempt to identify the essential features of a model, but these explanations can be inconsistent: two near-optimal models may admit vastly different explanations. In this paper, we propose a solution to this problem by constructing uncertainty sets for explanations of the optimal model(s) in both frequentist and Bayesian settings. Our uncertainty sets are guaranteed to include the explanation of the optimal model with high probability, even though this model is unknown. We demonstrate the effectiveness of our approach in both synthetic and real-world experiments, illustrating how our uncertainty sets can be used to calibrate trust in model explanations.
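The phenomenon the abstract describes — near-optimal models admitting very different explanations — can be made concrete with a simple bootstrap sketch. This is an illustrative stand-in, not the paper's frequentist or Bayesian construction: we refit a linear model on resampled data and form percentile intervals for each coefficient, treating coefficients as feature "explanations". The data, model, and interval recipe below are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's algorithm): bootstrap percentile
# intervals for linear-model coefficients, read as per-feature
# "explanation" uncertainty. Wide intervals signal that refit models
# disagree about a feature's importance.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
n, d = 200, 3
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def fit_ols(X, y):
    """Ordinary least squares fit; returns the coefficient vector."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Refit on B bootstrap resamples and collect the coefficients.
B = 1000
coefs = np.empty((B, d))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    coefs[b] = fit_ols(X[idx], y[idx])

# Percentile intervals: a (1 - alpha) uncertainty interval per feature.
alpha = 0.1
lo = np.percentile(coefs, 100 * alpha / 2, axis=0)
hi = np.percentile(coefs, 100 * (1 - alpha / 2), axis=0)
for j in range(d):
    print(f"feature {j}: [{lo[j]:+.3f}, {hi[j]:+.3f}]")
```

In this toy setting the interval for feature 0 sits well away from zero while the interval for the irrelevant feature hovers near it, which is the kind of calibrated trust signal the paper's uncertainty sets are designed to provide with formal coverage guarantees.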

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-marx23a,
  title     = {But Are You Sure? An Uncertainty-Aware Perspective on Explainable AI},
  author    = {Marx, Charles and Park, Youngsuk and Hasson, Hilaf and Wang, Yuyang and Ermon, Stefano and Huan, Luke},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {7375--7391},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/marx23a/marx23a.pdf},
  url       = {https://proceedings.mlr.press/v206/marx23a.html},
  abstract  = {Although black-box models can accurately predict outcomes such as weather patterns, they often lack transparency, making it challenging to extract meaningful insights (such as which atmospheric conditions signal future rainfall). Model explanations attempt to identify the essential features of a model, but these explanations can be inconsistent: two near-optimal models may admit vastly different explanations. In this paper, we propose a solution to this problem by constructing uncertainty sets for explanations of the optimal model(s) in both frequentist and Bayesian settings. Our uncertainty sets are guaranteed to include the explanation of the optimal model with high probability, even though this model is unknown. We demonstrate the effectiveness of our approach in both synthetic and real-world experiments, illustrating how our uncertainty sets can be used to calibrate trust in model explanations.}
}
Endnote
%0 Conference Paper
%T But Are You Sure? An Uncertainty-Aware Perspective on Explainable AI
%A Charles Marx
%A Youngsuk Park
%A Hilaf Hasson
%A Yuyang Wang
%A Stefano Ermon
%A Luke Huan
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-marx23a
%I PMLR
%P 7375--7391
%U https://proceedings.mlr.press/v206/marx23a.html
%V 206
%X Although black-box models can accurately predict outcomes such as weather patterns, they often lack transparency, making it challenging to extract meaningful insights (such as which atmospheric conditions signal future rainfall). Model explanations attempt to identify the essential features of a model, but these explanations can be inconsistent: two near-optimal models may admit vastly different explanations. In this paper, we propose a solution to this problem by constructing uncertainty sets for explanations of the optimal model(s) in both frequentist and Bayesian settings. Our uncertainty sets are guaranteed to include the explanation of the optimal model with high probability, even though this model is unknown. We demonstrate the effectiveness of our approach in both synthetic and real-world experiments, illustrating how our uncertainty sets can be used to calibrate trust in model explanations.
APA
Marx, C., Park, Y., Hasson, H., Wang, Y., Ermon, S. & Huan, L. (2023). But Are You Sure? An Uncertainty-Aware Perspective on Explainable AI. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:7375-7391. Available from https://proceedings.mlr.press/v206/marx23a.html.