Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations

Yanda Chen, Ruiqi Zhong, Narutatsu Ri, Chen Zhao, He He, Jacob Steinhardt, Zhou Yu, Kathleen Mckeown
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:7880-7904, 2024.

Abstract

Large language models (LLMs) are trained to imitate humans to explain human decisions. However, do LLMs explain themselves? Can they help humans build mental models of how LLMs process different inputs? To answer these questions, we propose to evaluate the $\textbf{counterfactual simulatability}$ of natural language explanations: whether an explanation can enable humans to precisely infer the model’s outputs on diverse counterfactuals of the explained input. For example, if a model answers “$\textit{yes}$” to the input question “$\textit{Can eagles fly?}$” with the explanation “$\textit{all birds can fly}$”, then humans would infer from the explanation that it would also answer “$\textit{yes}$” to the counterfactual input “$\textit{Can penguins fly?}$”. If the explanation is precise, then the model’s answer should match humans’ expectations. We implemented two metrics based on counterfactual simulatability: precision and generality. We generated diverse counterfactuals automatically using LLMs. We then used these metrics to evaluate state-of-the-art LLMs (e.g., GPT-4) on two tasks: multi-hop factual reasoning and reward modeling. We found that LLMs’ explanations have low precision and that precision does not correlate with plausibility. Therefore, naively optimizing human approvals (e.g., RLHF) may be insufficient.
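
To make the precision metric described above concrete, here is a minimal sketch, not the authors' implementation; the names simulation_precision, model, simulator, and generate_counterfactuals are hypothetical stand-ins. Counterfactuals of the explained input are generated (e.g., by prompting an LLM), a simulator infers the model's answer from the explanation alone, and precision is the fraction of simulatable counterfactuals on which that inference matches the model's actual answer.

from typing import Callable, Iterable, Optional

def simulation_precision(
    question: str,
    explanation: str,
    model: Callable[[str], str],                     # maps an input question to the model's answer
    simulator: Callable[[str, str], Optional[str]],  # (explanation, counterfactual) -> guessed answer,
                                                     # or None if the explanation says nothing about it
    generate_counterfactuals: Callable[[str], Iterable[str]],  # e.g., prompt an LLM for related questions
) -> float:
    """Fraction of simulatable counterfactuals on which the answer inferred
    from the explanation alone matches the model's actual answer."""
    matches, simulatable = 0, 0
    for counterfactual in generate_counterfactuals(question):   # e.g., "Can penguins fly?"
        guess = simulator(explanation, counterfactual)
        if guess is None:          # explanation is uninformative for this counterfactual
            continue
        simulatable += 1
        if guess == model(counterfactual):
            matches += 1
    return matches / simulatable if simulatable else float("nan")

Under the eagle/penguin example from the abstract, a simulator reading “all birds can fly” would guess “yes” for “Can penguins fly?”, so precision drops whenever the model itself answers “no”.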

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-chen24bl,
  title     = {Do Models Explain Themselves? {C}ounterfactual Simulatability of Natural Language Explanations},
  author    = {Chen, Yanda and Zhong, Ruiqi and Ri, Narutatsu and Zhao, Chen and He, He and Steinhardt, Jacob and Yu, Zhou and Mckeown, Kathleen},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {7880--7904},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/chen24bl/chen24bl.pdf},
  url       = {https://proceedings.mlr.press/v235/chen24bl.html},
  abstract  = {Large language models (LLMs) are trained to imitate humans to explain human decisions. However, do LLMs explain themselves? Can they help humans build mental models of how LLMs process different inputs? To answer these questions, we propose to evaluate $\textbf{counterfactual simulatability}$ of natural language explanations: whether an explanation can enable humans to precisely infer the model’s outputs on diverse counterfactuals of the explained input. For example, if a model answers “$\textit{yes}$” to the input question “$\textit{Can eagles fly?}$” with the explanation “$\textit{all birds can fly}$”, then humans would infer from the explanation that it would also answer “$\textit{yes}$” to the counterfactual input “$\textit{Can penguins fly?}$”. If the explanation is precise, then the model’s answer should match humans’ expectations. We implemented two metrics based on counterfactual simulatability: precision and generality. We generated diverse counterfactuals automatically using LLMs. We then used these metrics to evaluate state-of-the-art LLMs (e.g., GPT-4) on two tasks: multi-hop factual reasoning and reward modeling. We found that LLM’s explanations have low precision and that precision does not correlate with plausibility. Therefore, naively optimizing human approvals (e.g., RLHF) may be insufficient.}
}
Endnote
%0 Conference Paper
%T Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations
%A Yanda Chen
%A Ruiqi Zhong
%A Narutatsu Ri
%A Chen Zhao
%A He He
%A Jacob Steinhardt
%A Zhou Yu
%A Kathleen Mckeown
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-chen24bl
%I PMLR
%P 7880--7904
%U https://proceedings.mlr.press/v235/chen24bl.html
%V 235
%X Large language models (LLMs) are trained to imitate humans to explain human decisions. However, do LLMs explain themselves? Can they help humans build mental models of how LLMs process different inputs? To answer these questions, we propose to evaluate $\textbf{counterfactual simulatability}$ of natural language explanations: whether an explanation can enable humans to precisely infer the model’s outputs on diverse counterfactuals of the explained input. For example, if a model answers “$\textit{yes}$” to the input question “$\textit{Can eagles fly?}$” with the explanation “$\textit{all birds can fly}$”, then humans would infer from the explanation that it would also answer “$\textit{yes}$” to the counterfactual input “$\textit{Can penguins fly?}$”. If the explanation is precise, then the model’s answer should match humans’ expectations. We implemented two metrics based on counterfactual simulatability: precision and generality. We generated diverse counterfactuals automatically using LLMs. We then used these metrics to evaluate state-of-the-art LLMs (e.g., GPT-4) on two tasks: multi-hop factual reasoning and reward modeling. We found that LLM’s explanations have low precision and that precision does not correlate with plausibility. Therefore, naively optimizing human approvals (e.g., RLHF) may be insufficient.
APA
Chen, Y., Zhong, R., Ri, N., Zhao, C., He, H., Steinhardt, J., Yu, Z. & Mckeown, K. (2024). Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:7880-7904. Available from https://proceedings.mlr.press/v235/chen24bl.html.
