TraCE: Trajectory Counterfactual Explanation Scores

Jeffrey Nicholas Clark, Edward Alexander Small, Nawid Keshtmand, Michelle Wing Lam Wan, Elena Fillola Mayoral, Enrico Werner, Christopher Bourdeaux, Raul Santos-Rodriguez
Proceedings of the 5th Northern Lights Deep Learning Conference (NLDL), PMLR 233:36-45, 2024.

Abstract

Counterfactual explanations, and their associated algorithmic recourse, are typically leveraged to understand and explain predictions of individual instances coming from a black-box classifier. In this paper, we propose to extend the use of counterfactuals to evaluate progress in sequential decision making tasks. To this end, we introduce a model-agnostic modular framework, TraCE (Trajectory Counterfactual Explanation) scores, to distill and condense progress in highly complex scenarios into a single value. We demonstrate TraCE’s utility by showcasing its main properties in two case studies spanning healthcare and climate change.

Cite this Paper


BibTeX
@InProceedings{pmlr-v233-clark24a,
  title     = {Tra{CE}: Trajectory Counterfactual Explanation Scores},
  author    = {Clark, Jeffrey Nicholas and Small, Edward Alexander and Keshtmand, Nawid and Wan, Michelle Wing Lam and Mayoral, Elena Fillola and Werner, Enrico and Bourdeaux, Christopher and Santos-Rodriguez, Raul},
  booktitle = {Proceedings of the 5th Northern Lights Deep Learning Conference ({NLDL})},
  pages     = {36--45},
  year      = {2024},
  editor    = {Lutchyn, Tetiana and Ramírez Rivera, Adín and Ricaud, Benjamin},
  volume    = {233},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v233/clark24a/clark24a.pdf},
  url       = {https://proceedings.mlr.press/v233/clark24a.html},
  abstract  = {Counterfactual explanations, and their associated algorithmic recourse, are typically leveraged to understand and explain predictions of individual instances coming from a black-box classifier. In this paper, we propose to extend the use of counterfactuals to evaluate progress in sequential decision making tasks. To this end, we introduce a model-agnostic modular framework, TraCE (Trajectory Counterfactual Explanation) scores, to distill and condense progress in highly complex scenarios into a single value. We demonstrate TraCE's utility by showcasing its main properties in two case studies spanning healthcare and climate change.}
}
Endnote
%0 Conference Paper
%T TraCE: Trajectory Counterfactual Explanation Scores
%A Jeffrey Nicholas Clark
%A Edward Alexander Small
%A Nawid Keshtmand
%A Michelle Wing Lam Wan
%A Elena Fillola Mayoral
%A Enrico Werner
%A Christopher Bourdeaux
%A Raul Santos-Rodriguez
%B Proceedings of the 5th Northern Lights Deep Learning Conference (NLDL)
%C Proceedings of Machine Learning Research
%D 2024
%E Tetiana Lutchyn
%E Adín Ramírez Rivera
%E Benjamin Ricaud
%F pmlr-v233-clark24a
%I PMLR
%P 36--45
%U https://proceedings.mlr.press/v233/clark24a.html
%V 233
%X Counterfactual explanations, and their associated algorithmic recourse, are typically leveraged to understand and explain predictions of individual instances coming from a black-box classifier. In this paper, we propose to extend the use of counterfactuals to evaluate progress in sequential decision making tasks. To this end, we introduce a model-agnostic modular framework, TraCE (Trajectory Counterfactual Explanation) scores, to distill and condense progress in highly complex scenarios into a single value. We demonstrate TraCE's utility by showcasing its main properties in two case studies spanning healthcare and climate change.
APA
Clark, J.N., Small, E.A., Keshtmand, N., Wan, M.W.L., Mayoral, E.F., Werner, E., Bourdeaux, C. & Santos-Rodriguez, R. (2024). TraCE: Trajectory Counterfactual Explanation Scores. Proceedings of the 5th Northern Lights Deep Learning Conference (NLDL), in Proceedings of Machine Learning Research 233:36-45. Available from https://proceedings.mlr.press/v233/clark24a.html.