Sparse and Faithful Explanations Without Sparse Models

Yiyang Sun, Zhi Chen, Vittorio Orlandi, Tong Wang, Cynthia Rudin
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2071-2079, 2024.

Abstract

Even if a model is not globally sparse, it is possible for decisions made from that model to be accurately and faithfully described by a small number of features. For instance, an application for a large loan might be denied to someone because they have no credit history, which overwhelms any evidence towards their creditworthiness. In this work, we introduce the Sparse Explanation Value (SEV), a new way of measuring sparsity in machine learning models. In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied. SEV is a measure of decision sparsity rather than overall model sparsity, and we are able to show that many machine learning models – even if they are not sparse – actually have low decision sparsity, as measured by SEV. SEV is defined using movements over a hypercube, allowing SEV to be defined consistently over various model classes, with movement restrictions reflecting real-world constraints. Our algorithms reduce SEV without sacrificing accuracy, providing sparse and completely faithful explanations, even without globally sparse models.
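To make the idea concrete, here is a small illustrative sketch (not the authors' implementation) of how decision sparsity in this sense could be brute-forced: move the query point toward a reference point one feature subset at a time, and report the smallest subset whose movement flips the decision. The linear model, weights, and reference point below are invented for illustration only.

```python
from itertools import combinations
import numpy as np

def sev(predict, x, reference, max_k=None):
    """Toy brute-force decision-sparsity computation.

    Returns the smallest number of features that, when moved from the
    query x to the reference point (a vertex move on the hypercube
    spanned by x and reference), flips the model's decision.
    """
    n = len(x)
    base = predict(x)
    for k in range(1, (max_k or n) + 1):
        for subset in combinations(range(n), k):
            x_mod = x.copy()
            idx = list(subset)
            x_mod[idx] = reference[idx]  # move these features to the reference
            if predict(x_mod) != base:
                return k
    return None  # no subset of features flips the decision

# Hypothetical linear "loan" classifier: deny (1) if score > 0.
# Feature 0 ("no credit history") dominates the other evidence.
w = np.array([3.0, 0.5, 0.4])
b = -1.0
predict = lambda v: int(v @ w + b > 0)

x = np.array([1.0, 1.0, 1.0])    # applicant: denied
ref = np.array([0.0, 0.0, 0.0])  # reference point: an approved profile
print(sev(predict, x, ref))      # -> 1: one factor explains the denial
```

In this toy example, zeroing only the dominant feature already flips the decision, so the explanation has size 1 even though the model itself uses all three features; exhaustive subset search is exponential, which is why the paper's algorithms matter in practice.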

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-sun24b,
  title = {Sparse and Faithful Explanations Without Sparse Models},
  author = {Sun, Yiyang and Chen, Zhi and Orlandi, Vittorio and Wang, Tong and Rudin, Cynthia},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages = {2071--2079},
  year = {2024},
  editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume = {238},
  series = {Proceedings of Machine Learning Research},
  month = {02--04 May},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v238/sun24b/sun24b.pdf},
  url = {https://proceedings.mlr.press/v238/sun24b.html},
  abstract = {Even if a model is not globally sparse, it is possible for decisions made from that model to be accurately and faithfully described by a small number of features. For instance, an application for a large loan might be denied to someone because they have no credit history, which overwhelms any evidence towards their creditworthiness. In this work, we introduce the Sparse Explanation Value (SEV), a new way of measuring sparsity in machine learning models. In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied. SEV is a measure of decision sparsity rather than overall model sparsity, and we are able to show that many machine learning models – even if they are not sparse – actually have low decision sparsity, as measured by SEV. SEV is defined using movements over a hypercube, allowing SEV to be defined consistently over various model classes, with movement restrictions reflecting real-world constraints. Our algorithms reduce SEV without sacrificing accuracy, providing sparse and completely faithful explanations, even without globally sparse models.}
}
Endnote
%0 Conference Paper
%T Sparse and Faithful Explanations Without Sparse Models
%A Yiyang Sun
%A Zhi Chen
%A Vittorio Orlandi
%A Tong Wang
%A Cynthia Rudin
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-sun24b
%I PMLR
%P 2071--2079
%U https://proceedings.mlr.press/v238/sun24b.html
%V 238
%X Even if a model is not globally sparse, it is possible for decisions made from that model to be accurately and faithfully described by a small number of features. For instance, an application for a large loan might be denied to someone because they have no credit history, which overwhelms any evidence towards their creditworthiness. In this work, we introduce the Sparse Explanation Value (SEV), a new way of measuring sparsity in machine learning models. In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied. SEV is a measure of decision sparsity rather than overall model sparsity, and we are able to show that many machine learning models – even if they are not sparse – actually have low decision sparsity, as measured by SEV. SEV is defined using movements over a hypercube, allowing SEV to be defined consistently over various model classes, with movement restrictions reflecting real-world constraints. Our algorithms reduce SEV without sacrificing accuracy, providing sparse and completely faithful explanations, even without globally sparse models.
APA
Sun, Y., Chen, Z., Orlandi, V., Wang, T. &amp; Rudin, C. (2024). Sparse and Faithful Explanations Without Sparse Models. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2071-2079. Available from https://proceedings.mlr.press/v238/sun24b.html.