PWSHAP: A Path-Wise Explanation Model for Targeted Variables

Lucile Ter-Minassian, Oscar Clivio, Karla Diazordaz, Robin J. Evans, Christopher C. Holmes
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:34054-34089, 2023.

Abstract

Predictive black-box models can exhibit high accuracy, but their opaque nature hinders their uptake in safety-critical deployment environments. Explanation methods (XAI) can provide confidence for decision-making through increased transparency. However, existing XAI methods are not tailored towards models in sensitive domains where one predictor is of special interest, such as a treatment effect in a clinical model, or ethnicity in policy models. We introduce Path-Wise Shapley effects (PWSHAP), a framework for assessing the targeted effect of a binary (e.g. treatment) variable from a complex outcome model. Our approach augments the predictive model with a user-defined directed acyclic graph (DAG). The method then uses the graph alongside on-manifold Shapley values to identify effects along causal pathways whilst maintaining robustness to adversarial attacks. We establish error bounds for the identified path-wise Shapley effects and for Shapley values. We show PWSHAP can perform local bias and mediation analyses with faithfulness to the model. Further, if the targeted variable is randomised, we can quantify local effect modification. We demonstrate the resolution, interpretability and true locality of our approach on examples and a real-world experiment.
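As background for readers unfamiliar with the on-manifold Shapley values the abstract refers to (the notation below is ours, not the paper's): on-manifold Shapley values evaluate coalitions with a conditional-expectation value function, so the model is only queried at inputs consistent with the data distribution. For a fitted model $f$, feature index set $N$, and an instance $x$, a standard formulation is

$$v_x(S) = \mathbb{E}\left[\, f(X) \mid X_S = x_S \,\right], \qquad S \subseteq N,$$

$$\phi_j(x) = \sum_{S \subseteq N \setminus \{j\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \, \bigl( v_x(S \cup \{j\}) - v_x(S) \bigr).$$

PWSHAP builds on this machinery: the attribution for the targeted binary variable is decomposed along the causal pathways of the user-defined DAG, yielding the path-wise Shapley effects for which the paper establishes error bounds; the exact decomposition is given in the paper itself.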

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-ter-minassian23a,
  title     = {{PWSHAP}: A Path-Wise Explanation Model for Targeted Variables},
  author    = {Ter-Minassian, Lucile and Clivio, Oscar and Diazordaz, Karla and Evans, Robin J. and Holmes, Christopher C.},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {34054--34089},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/ter-minassian23a/ter-minassian23a.pdf},
  url       = {https://proceedings.mlr.press/v202/ter-minassian23a.html},
  abstract  = {Predictive black-box models can exhibit high accuracy, but their opaque nature hinders their uptake in safety-critical deployment environments. Explanation methods (XAI) can provide confidence for decision-making through increased transparency. However, existing XAI methods are not tailored towards models in sensitive domains where one predictor is of special interest, such as a treatment effect in a clinical model, or ethnicity in policy models. We introduce Path-Wise Shapley effects (PWSHAP), a framework for assessing the targeted effect of a binary (e.g. treatment) variable from a complex outcome model. Our approach augments the predictive model with a user-defined directed acyclic graph (DAG). The method then uses the graph alongside on-manifold Shapley values to identify effects along causal pathways whilst maintaining robustness to adversarial attacks. We establish error bounds for the identified path-wise Shapley effects and for Shapley values. We show PWSHAP can perform local bias and mediation analyses with faithfulness to the model. Further, if the targeted variable is randomised, we can quantify local effect modification. We demonstrate the resolution, interpretability and true locality of our approach on examples and a real-world experiment.}
}
Endnote
%0 Conference Paper
%T PWSHAP: A Path-Wise Explanation Model for Targeted Variables
%A Lucile Ter-Minassian
%A Oscar Clivio
%A Karla Diazordaz
%A Robin J. Evans
%A Christopher C. Holmes
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-ter-minassian23a
%I PMLR
%P 34054--34089
%U https://proceedings.mlr.press/v202/ter-minassian23a.html
%V 202
%X Predictive black-box models can exhibit high accuracy, but their opaque nature hinders their uptake in safety-critical deployment environments. Explanation methods (XAI) can provide confidence for decision-making through increased transparency. However, existing XAI methods are not tailored towards models in sensitive domains where one predictor is of special interest, such as a treatment effect in a clinical model, or ethnicity in policy models. We introduce Path-Wise Shapley effects (PWSHAP), a framework for assessing the targeted effect of a binary (e.g. treatment) variable from a complex outcome model. Our approach augments the predictive model with a user-defined directed acyclic graph (DAG). The method then uses the graph alongside on-manifold Shapley values to identify effects along causal pathways whilst maintaining robustness to adversarial attacks. We establish error bounds for the identified path-wise Shapley effects and for Shapley values. We show PWSHAP can perform local bias and mediation analyses with faithfulness to the model. Further, if the targeted variable is randomised, we can quantify local effect modification. We demonstrate the resolution, interpretability and true locality of our approach on examples and a real-world experiment.
APA
Ter-Minassian, L., Clivio, O., Diazordaz, K., Evans, R.J. & Holmes, C.C. (2023). PWSHAP: A Path-Wise Explanation Model for Targeted Variables. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:34054-34089. Available from https://proceedings.mlr.press/v202/ter-minassian23a.html.
