A Shapley-value Guided Rationale Editor for Rationale Learning

Zixin Kuang, Meng-Fen Chiang, Wang-Chien Lee
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4771-4779, 2025.

Abstract

Rationale learning aims to automatically uncover the underlying explanations for NLP predictions. Previous studies in rationale learning mainly focus on the relevance of individual tokens to the predictions, without considering their marginal contributions or the collective readability of the extracted rationales. Through an empirical analysis, we argue that the sufficiency, informativeness, and readability of rationales are essential for explaining diverse end-task predictions. Accordingly, we propose the Shapley-value Guided Rationale Editor (SHARE), an unsupervised approach that refines editable rationales while predicting task outcomes. SHARE extracts a sequence of tokens as a rationale, providing a collective explanation that is sufficient, informative, and readable. SHARE is highly adaptable to tasks such as sentiment analysis, claim verification, and question answering, and integrates seamlessly with various language models to provide explainability. Extensive experiments demonstrate its effectiveness in balancing sufficiency, informativeness, and readability across diverse applications. Our code and datasets are available at \url{https://github.com/zixinK/SHARE}.
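The "marginal contribution" the abstract refers to is the Shapley value of each token: its average effect on the model's prediction over the orders in which tokens could be revealed. As a rough illustration of that guiding signal (not the authors' SHARE implementation, which is in the linked repository), the sketch below estimates token-level Shapley values by Monte Carlo sampling over random permutations; predict_proba and the [MASK] placeholder are hypothetical stand-ins for a task model and its masking scheme.

    # A minimal sketch of Monte Carlo Shapley-value estimation for tokens.
    # This is NOT the SHARE implementation; predict_proba is a hypothetical
    # callable that scores a (partially masked) token sequence for the
    # target label, and "[MASK]" is an assumed placeholder token.
    import random
    from typing import Callable, List

    MASK = "[MASK]"

    def token_shapley_values(tokens: List[str],
                             predict_proba: Callable[[List[str]], float],
                             num_samples: int = 200) -> List[float]:
        """Estimate each token's Shapley value: its average marginal
        effect on predict_proba over random token-reveal orders."""
        n = len(tokens)
        values = [0.0] * n
        for _ in range(num_samples):
            order = random.sample(range(n), n)   # one random permutation
            revealed = [False] * n               # which tokens are visible
            prev = predict_proba([t if r else MASK
                                  for t, r in zip(tokens, revealed)])
            for idx in order:
                revealed[idx] = True             # reveal token idx
                cur = predict_proba([t if r else MASK
                                     for t, r in zip(tokens, revealed)])
                values[idx] += cur - prev        # marginal contribution
                prev = cur
        return [v / num_samples for v in values]

Under this view, tokens with the largest estimated values are natural rationale candidates, which an editor can then refine toward the sufficiency, informativeness, and readability criteria the paper targets.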

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-kuang25a,
  title     = {A Shapley-value Guided Rationale Editor for Rationale Learning},
  author    = {Kuang, Zixin and Chiang, Meng-Fen and Lee, Wang-Chien},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4771--4779},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/kuang25a/kuang25a.pdf},
  url       = {https://proceedings.mlr.press/v258/kuang25a.html},
  abstract  = {Rationale learning aims to automatically uncover the underlying explanations for NLP predictions. Previous studies in rationale learning mainly focus on the relevance of individual tokens to the predictions, without considering their marginal contributions or the collective readability of the extracted rationales. Through an empirical analysis, we argue that the sufficiency, informativeness, and readability of rationales are essential for explaining diverse end-task predictions. Accordingly, we propose the Shapley-value Guided Rationale Editor (SHARE), an unsupervised approach that refines editable rationales while predicting task outcomes. SHARE extracts a sequence of tokens as a rationale, providing a collective explanation that is sufficient, informative, and readable. SHARE is highly adaptable to tasks such as sentiment analysis, claim verification, and question answering, and integrates seamlessly with various language models to provide explainability. Extensive experiments demonstrate its effectiveness in balancing sufficiency, informativeness, and readability across diverse applications. Our code and datasets are available at \url{https://github.com/zixinK/SHARE}.}
}
Endnote
%0 Conference Paper
%T A Shapley-value Guided Rationale Editor for Rationale Learning
%A Zixin Kuang
%A Meng-Fen Chiang
%A Wang-Chien Lee
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-kuang25a
%I PMLR
%P 4771--4779
%U https://proceedings.mlr.press/v258/kuang25a.html
%V 258
%X Rationale learning aims to automatically uncover the underlying explanations for NLP predictions. Previous studies in rationale learning mainly focus on the relevance of individual tokens to the predictions, without considering their marginal contributions or the collective readability of the extracted rationales. Through an empirical analysis, we argue that the sufficiency, informativeness, and readability of rationales are essential for explaining diverse end-task predictions. Accordingly, we propose the Shapley-value Guided Rationale Editor (SHARE), an unsupervised approach that refines editable rationales while predicting task outcomes. SHARE extracts a sequence of tokens as a rationale, providing a collective explanation that is sufficient, informative, and readable. SHARE is highly adaptable to tasks such as sentiment analysis, claim verification, and question answering, and integrates seamlessly with various language models to provide explainability. Extensive experiments demonstrate its effectiveness in balancing sufficiency, informativeness, and readability across diverse applications. Our code and datasets are available at https://github.com/zixinK/SHARE.
APA
Kuang, Z., Chiang, M., & Lee, W. (2025). A Shapley-value Guided Rationale Editor for Rationale Learning. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4771-4779. Available from https://proceedings.mlr.press/v258/kuang25a.html.