Confident Feature Ranking

Bitya Neuhof, Yuval Benjamini
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1468-1476, 2024.

Abstract

Machine learning models are widely applied in various fields. Stakeholders often use post-hoc feature importance methods to better understand the input features’ contribution to the models’ predictions. The interpretation of the importance values provided by these methods is frequently based on the relative order of the features (their ranking) rather than the importance values themselves. Since the order may be unstable, we present a framework for quantifying the uncertainty in global importance values. We propose a novel method for the post-hoc interpretation of feature importance values that is based on the framework and pairwise comparisons of the feature importance values. This method produces simultaneous confidence intervals for the features’ ranks, which include the “true” (infinite sample) ranks with high probability, and enables the selection of the set of top-k important features.
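
To make the idea concrete, the sketch below (not the authors' reference implementation) shows one way to turn repeated estimates of global importance values into simultaneous confidence intervals for the features' ranks via pairwise comparisons. The importances matrix (e.g. per-bootstrap or per-fold importance estimates), the paired t-tests, and the Bonferroni correction over all pairs are illustrative assumptions; the test and multiplicity adjustment used in the paper may differ.

    # Minimal sketch: rank confidence intervals from pairwise comparisons.
    # Assumes `importances` has shape (n_repeats, n_features), where each row is
    # one repeated estimate of the global importance values; rank 1 = most important.
    import numpy as np
    from scipy import stats

    def rank_confidence_intervals(importances: np.ndarray, alpha: float = 0.05):
        """Return (lower, upper) simultaneous rank bounds for each feature."""
        n_repeats, d = importances.shape
        n_pairs = d * (d - 1) // 2
        alpha_adj = alpha / n_pairs              # Bonferroni over all pairwise tests (illustrative choice)

        lower = np.ones(d, dtype=int)            # best achievable rank for each feature
        upper = np.full(d, d, dtype=int)         # worst achievable rank for each feature

        for j in range(d):
            for k in range(j + 1, d):
                # Paired test of whether features j and k differ in importance
                _, p_value = stats.ttest_rel(importances[:, j], importances[:, k])
                if p_value < alpha_adj:
                    if importances[:, j].mean() > importances[:, k].mean():
                        lower[k] += 1            # j is confidently above k, so k's rank is at least lower[k]
                        upper[j] -= 1            # and j's rank is at most upper[j]
                    else:
                        lower[j] += 1
                        upper[k] -= 1
        return lower, upper

Under these assumptions, [lower[j], upper[j]] covers feature j's "true" (infinite sample) rank simultaneously for all features with probability at least 1 - alpha, and the top-k set can be read off as the features whose upper bound is at most k.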

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-neuhof24a,
  title     = {Confident Feature Ranking},
  author    = {Neuhof, Bitya and Benjamini, Yuval},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1468--1476},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/neuhof24a/neuhof24a.pdf},
  url       = {https://proceedings.mlr.press/v238/neuhof24a.html}
}
Endnote
%0 Conference Paper
%T Confident Feature Ranking
%A Bitya Neuhof
%A Yuval Benjamini
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-neuhof24a
%I PMLR
%P 1468--1476
%U https://proceedings.mlr.press/v238/neuhof24a.html
%V 238
APA
Neuhof, B., & Benjamini, Y. (2024). Confident Feature Ranking. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1468-1476. Available from https://proceedings.mlr.press/v238/neuhof24a.html.