Language Models Represent Beliefs of Self and Others

Wentao Zhu, Zhining Zhang, Yizhou Wang
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:62638-62681, 2024.

Abstract

Understanding and attributing mental states, known as Theory of Mind (ToM), emerges as a fundamental capability for human social reasoning. While Large Language Models (LLMs) appear to possess certain ToM abilities, the mechanisms underlying these capabilities remain elusive. In this study, we discover that it is possible to linearly decode the belief status from the perspectives of various agents through neural activations of language models, indicating the existence of internal representations of self and others’ beliefs. By manipulating these representations, we observe dramatic changes in the models’ ToM performance, underscoring their pivotal role in the social reasoning process. Additionally, our findings extend to diverse social reasoning tasks that involve different causal inference patterns, suggesting the potential generalizability of these representations.
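
The abstract's central claim, that an agent's belief status can be linearly decoded from a language model's internal activations, can be illustrated with a simple probing setup. The following is a minimal sketch rather than the authors' exact protocol: it assumes a HuggingFace causal LM (gpt2 as a stand-in for the larger LLMs studied in the paper), a few toy false-belief stories, and binary labels for whether the named agent's belief matches reality. A logistic-regression probe is fit on last-token hidden states at a single layer; all names and data below are illustrative placeholders.

# Sketch of a linear belief probe over LM activations (illustrative only;
# model choice, layer, and data are assumptions, not the paper's setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"   # placeholder; the paper studies larger LLMs
LAYER = 8             # hidden layer to probe (arbitrary choice here)

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_activation(text: str) -> torch.Tensor:
    """Hidden state of the final token at the chosen layer."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[LAYER][0, -1]

# Toy ToM stories; label 1 = the agent's belief matches reality, 0 = false belief.
stories = [
    "Sally puts the ball in the basket and stays. Sally believes the ball is in the basket.",
    "Sally puts the ball in the basket and leaves; Anne moves it to the box. Sally believes the ball is in the basket.",
    "Tom watches Mom put the chocolate in the drawer. Tom believes the chocolate is in the drawer.",
    "Tom leaves; Mom moves the chocolate from the drawer to the fridge. Tom believes the chocolate is in the drawer.",
]
labels = [1, 0, 1, 0]

X = torch.stack([last_token_activation(s) for s in stories]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy on toy examples:", probe.score(X, labels))

In a real experiment one would of course use many stories with a held-out test split and sweep over layers; the toy data above only keeps the sketch self-contained.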

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zhu24o,
  title     = {Language Models Represent Beliefs of Self and Others},
  author    = {Zhu, Wentao and Zhang, Zhining and Wang, Yizhou},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {62638--62681},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zhu24o/zhu24o.pdf},
  url       = {https://proceedings.mlr.press/v235/zhu24o.html},
  abstract  = {Understanding and attributing mental states, known as Theory of Mind (ToM), emerges as a fundamental capability for human social reasoning. While Large Language Models (LLMs) appear to possess certain ToM abilities, the mechanisms underlying these capabilities remain elusive. In this study, we discover that it is possible to linearly decode the belief status from the perspectives of various agents through neural activations of language models, indicating the existence of internal representations of self and others’ beliefs. By manipulating these representations, we observe dramatic changes in the models’ ToM performance, underscoring their pivotal role in the social reasoning process. Additionally, our findings extend to diverse social reasoning tasks that involve different causal inference patterns, suggesting the potential generalizability of these representations.}
}
Endnote
%0 Conference Paper
%T Language Models Represent Beliefs of Self and Others
%A Wentao Zhu
%A Zhining Zhang
%A Yizhou Wang
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zhu24o
%I PMLR
%P 62638--62681
%U https://proceedings.mlr.press/v235/zhu24o.html
%V 235
%X Understanding and attributing mental states, known as Theory of Mind (ToM), emerges as a fundamental capability for human social reasoning. While Large Language Models (LLMs) appear to possess certain ToM abilities, the mechanisms underlying these capabilities remain elusive. In this study, we discover that it is possible to linearly decode the belief status from the perspectives of various agents through neural activations of language models, indicating the existence of internal representations of self and others’ beliefs. By manipulating these representations, we observe dramatic changes in the models’ ToM performance, underscoring their pivotal role in the social reasoning process. Additionally, our findings extend to diverse social reasoning tasks that involve different causal inference patterns, suggesting the potential generalizability of these representations.
APA
Zhu, W., Zhang, Z. & Wang, Y. (2024). Language Models Represent Beliefs of Self and Others. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:62638-62681. Available from https://proceedings.mlr.press/v235/zhu24o.html.