Invariant Rationalization

Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1448-1458, 2020.

Abstract

Selective rationalization improves neural network interpretability by identifying a small subset of input features (the rationale) that best explains or supports the prediction. A typical rationalization criterion, i.e., maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale. However, MMI can be problematic because it picks up spurious correlations between the input features and the output. Instead, we introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments. We show both theoretically and empirically that the proposed rationales can rule out spurious correlations and generalize better to different test scenarios. The resulting explanations also align better with human judgments. Our implementations are publicly available at https://github.com/code-terminator/invariant_rationalization.
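The two criteria contrasted in the abstract can be paraphrased formally. In the notation below (ours, not taken from this page), X is the input, Y the label, E an environment index, and Z the selected rationale; the invariance constraint follows the conditional-independence form the paper uses:

```latex
% Notation (ours): X input, Y label, E environment, Z = m(X) the rationale
% produced by a selector m from a feasible set \mathcal{S} (e.g., sparse masks).

% MMI criterion: choose the rationale that is maximally informative about Y.
\max_{m \in \mathcal{S}} \; I(Y; Z), \qquad Z = m(X)

% Invariant rationalization: keep the MMI objective, but additionally require
% that Y be independent of the environment E given Z, ruling out features
% whose correlation with Y varies across environments (spurious correlations).
\max_{m \in \mathcal{S}} \; I(Y; Z)
\quad \text{s.t.} \quad Y \perp E \mid Z
```

Intuitively, a spuriously correlated feature predicts Y well in some environments but not others, so conditioning on it leaves Y dependent on E; the constraint excludes such features while the MMI term keeps the rationale informative.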

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-chang20c,
  title     = {Invariant Rationalization},
  author    = {Chang, Shiyu and Zhang, Yang and Yu, Mo and Jaakkola, Tommi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1448--1458},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/chang20c/chang20c.pdf},
  url       = {http://proceedings.mlr.press/v119/chang20c.html},
  abstract  = {Selective rationalization improves neural network interpretability by identifying a small subset of input features (the rationale) that best explains or supports the prediction. A typical rationalization criterion, i.e., maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale. However, MMI can be problematic because it picks up spurious correlations between the input features and the output. Instead, we introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments. We show both theoretically and empirically that the proposed rationales can rule out spurious correlations and generalize better to different test scenarios. The resulting explanations also align better with human judgments. Our implementations are publicly available at https://github.com/code-terminator/invariant_rationalization.}
}
Endnote
%0 Conference Paper
%T Invariant Rationalization
%A Shiyu Chang
%A Yang Zhang
%A Mo Yu
%A Tommi Jaakkola
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-chang20c
%I PMLR
%P 1448--1458
%U http://proceedings.mlr.press/v119/chang20c.html
%V 119
%X Selective rationalization improves neural network interpretability by identifying a small subset of input features (the rationale) that best explains or supports the prediction. A typical rationalization criterion, i.e., maximum mutual information (MMI), finds the rationale that maximizes the prediction performance based only on the rationale. However, MMI can be problematic because it picks up spurious correlations between the input features and the output. Instead, we introduce a game-theoretic invariant rationalization criterion where the rationales are constrained to enable the same predictor to be optimal across different environments. We show both theoretically and empirically that the proposed rationales can rule out spurious correlations and generalize better to different test scenarios. The resulting explanations also align better with human judgments. Our implementations are publicly available at https://github.com/code-terminator/invariant_rationalization.
APA
Chang, S., Zhang, Y., Yu, M. & Jaakkola, T. (2020). Invariant Rationalization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1448-1458. Available from http://proceedings.mlr.press/v119/chang20c.html.