Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners?

Andreas Opedal, Alessandro Stolfo, Haruki Shirakami, Ying Jiao, Ryan Cotterell, Bernhard Schölkopf, Abulhair Saparov, Mrinmaya Sachan
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:38762-38778, 2024.

Abstract

There is increasing interest in employing large language models (LLMs) as cognitive models. For such purposes, it is central to understand which properties of human cognition are well-modeled by LLMs, and which are not. In this work, we study the biases of LLMs in relation to those known in children when solving arithmetic word problems. Surveying the learning science literature, we posit that the problem-solving process can be split into three distinct steps: text comprehension, solution planning and solution execution. We construct tests for each one in order to understand whether current LLMs display the same cognitive biases as children in these steps. We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features. We find evidence that LLMs, with and without instruction-tuning, exhibit human-like biases in both the text-comprehension and the solution-planning steps of the solving process, but not in the final step, in which the arithmetic expressions are executed to obtain the answer.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-opedal24a,
  title     = {Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners?},
  author    = {Opedal, Andreas and Stolfo, Alessandro and Shirakami, Haruki and Jiao, Ying and Cotterell, Ryan and Sch\"{o}lkopf, Bernhard and Saparov, Abulhair and Sachan, Mrinmaya},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {38762--38778},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/opedal24a/opedal24a.pdf},
  url       = {https://proceedings.mlr.press/v235/opedal24a.html},
  abstract  = {There is increasing interest in employing large language models (LLMs) as cognitive models. For such purposes, it is central to understand which properties of human cognition are well-modeled by LLMs, and which are not. In this work, we study the biases of LLMs in relation to those known in children when solving arithmetic word problems. Surveying the learning science literature, we posit that the problem-solving process can be split into three distinct steps: text comprehension, solution planning and solution execution. We construct tests for each one in order to understand whether current LLMs display the same cognitive biases as children in these steps. We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features. We find evidence that LLMs, with and without instruction-tuning, exhibit human-like biases in both the text-comprehension and the solution-planning steps of the solving process, but not in the final step, in which the arithmetic expressions are executed to obtain the answer.}
}
Endnote
%0 Conference Paper
%T Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners?
%A Andreas Opedal
%A Alessandro Stolfo
%A Haruki Shirakami
%A Ying Jiao
%A Ryan Cotterell
%A Bernhard Schölkopf
%A Abulhair Saparov
%A Mrinmaya Sachan
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-opedal24a
%I PMLR
%P 38762--38778
%U https://proceedings.mlr.press/v235/opedal24a.html
%V 235
%X There is increasing interest in employing large language models (LLMs) as cognitive models. For such purposes, it is central to understand which properties of human cognition are well-modeled by LLMs, and which are not. In this work, we study the biases of LLMs in relation to those known in children when solving arithmetic word problems. Surveying the learning science literature, we posit that the problem-solving process can be split into three distinct steps: text comprehension, solution planning and solution execution. We construct tests for each one in order to understand whether current LLMs display the same cognitive biases as children in these steps. We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features. We find evidence that LLMs, with and without instruction-tuning, exhibit human-like biases in both the text-comprehension and the solution-planning steps of the solving process, but not in the final step, in which the arithmetic expressions are executed to obtain the answer.
APA
Opedal, A., Stolfo, A., Shirakami, H., Jiao, Y., Cotterell, R., Schölkopf, B., Saparov, A. & Sachan, M. (2024). Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners? Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:38762-38778. Available from https://proceedings.mlr.press/v235/opedal24a.html.