Prospective Explanations: An Interactive Mechanism for Model Understanding

Rahul Nair, Pierpaolo Tommasi
Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, PMLR 176:273-277, 2022.

Abstract

We demonstrate a system for prospective explanations of black-box models for regression and classification tasks on structured data. Prospective explanations aim to show how models function by highlighting likely changes in model outcomes under changes in input. This is in contrast to most post-hoc explainability methods, which aim to justify a decision retrospectively. To do so, we employ a surrogate Bayesian network model and learn dependencies through a structure learning task. Our system is designed to provide fast estimates of changes in outcomes for any arbitrary exploratory query from users. Such queries are typically partial, i.e. they involve only a selected subset of features; outcome labels are therefore shown as likelihoods. Repeated queries can indicate which aspects of the feature space are more likely to influence the target variable. We demonstrate the system on a real-world application from the humanitarian sector and show the value of Bayesian network surrogates.
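The pipeline the abstract sketches (label data with the black-box model's predictions, learn a network structure, fit a Bayesian network surrogate, answer partial queries as likelihoods) can be illustrated in a few lines. The following is a minimal sketch using the pgmpy library (class names as in pgmpy 0.x; newer releases rename some of these), not the authors' implementation; the dataset, variable names, and the stand-in black-box outcome are all synthetic assumptions.

import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianNetwork

# Synthetic stand-in for a structured dataset plus a black-box model's
# predictions on it; in the paper's setting these would come from the
# real data and the fitted black-box model.
rng = np.random.default_rng(0)
n = 2000
data = pd.DataFrame({
    "income": rng.choice(["low", "high"], size=n),
    "region": rng.choice(["north", "south"], size=n),
})
# Hypothetical black-box outcome, correlated with income.
data["outcome"] = np.where(
    (data["income"] == "high") & (rng.random(n) > 0.2), "grant", "deny")

# Structure learning: recover a dependency graph over the features and
# the black-box outcome.
dag = HillClimbSearch(data).estimate(scoring_method=BicScore(data))

# Fit the Bayesian network surrogate on the learned structure.
surrogate = BayesianNetwork(dag.edges())
surrogate.add_nodes_from(data.columns)  # keep isolated variables, if any
surrogate.fit(data)

# A partial query: evidence on one feature only. The outcome comes back
# as a likelihood over labels rather than a point prediction.
posterior = VariableElimination(surrogate).query(
    variables=["outcome"], evidence={"income": "low"})
print(posterior)

Repeating such queries with different partial evidence, as the abstract describes, is then just a matter of varying the evidence dictionary; because inference runs on the compact surrogate rather than the black-box model, each query returns quickly.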

Cite this Paper


BibTeX
@InProceedings{pmlr-v176-nair22a,
  title     = {Prospective Explanations: An Interactive Mechanism for Model Understanding},
  author    = {Nair, Rahul and Tommasi, Pierpaolo},
  booktitle = {Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track},
  pages     = {273--277},
  year      = {2022},
  editor    = {Kiela, Douwe and Ciccone, Marco and Caputo, Barbara},
  volume    = {176},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v176/nair22a/nair22a.pdf},
  url       = {https://proceedings.mlr.press/v176/nair22a.html},
  abstract  = {We demonstrate a system for prospective explanations of black-box models for regression and classification tasks on structured data. Prospective explanations aim to show how models function by highlighting likely changes in model outcomes under changes in input. This is in contrast to most post-hoc explainability methods, which aim to justify a decision retrospectively. To do so, we employ a surrogate Bayesian network model and learn dependencies through a structure learning task. Our system is designed to provide fast estimates of changes in outcomes for any arbitrary exploratory query from users. Such queries are typically partial, i.e. they involve only a selected subset of features; outcome labels are therefore shown as likelihoods. Repeated queries can indicate which aspects of the feature space are more likely to influence the target variable. We demonstrate the system on a real-world application from the humanitarian sector and show the value of Bayesian network surrogates.}
}
Endnote
%0 Conference Paper
%T Prospective Explanations: An Interactive Mechanism for Model Understanding
%A Rahul Nair
%A Pierpaolo Tommasi
%B Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track
%C Proceedings of Machine Learning Research
%D 2022
%E Douwe Kiela
%E Marco Ciccone
%E Barbara Caputo
%F pmlr-v176-nair22a
%I PMLR
%P 273--277
%U https://proceedings.mlr.press/v176/nair22a.html
%V 176
%X We demonstrate a system for prospective explanations of black-box models for regression and classification tasks on structured data. Prospective explanations aim to show how models function by highlighting likely changes in model outcomes under changes in input. This is in contrast to most post-hoc explainability methods, which aim to justify a decision retrospectively. To do so, we employ a surrogate Bayesian network model and learn dependencies through a structure learning task. Our system is designed to provide fast estimates of changes in outcomes for any arbitrary exploratory query from users. Such queries are typically partial, i.e. they involve only a selected subset of features; outcome labels are therefore shown as likelihoods. Repeated queries can indicate which aspects of the feature space are more likely to influence the target variable. We demonstrate the system on a real-world application from the humanitarian sector and show the value of Bayesian network surrogates.
APA
Nair, R., & Tommasi, P. (2022). Prospective Explanations: An Interactive Mechanism for Model Understanding. Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, in Proceedings of Machine Learning Research 176:273-277. Available from https://proceedings.mlr.press/v176/nair22a.html.