In-Context Learning Agents Are Asymmetric Belief Updaters

Johannes A. Schubert, Akshay Kumar Jagadish, Marcel Binz, Eric Schulz
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:43928-43946, 2024.

Abstract

We study the in-context learning dynamics of large language models (LLMs) using three instrumental learning tasks adapted from cognitive psychology. We find that LLMs update their beliefs in an asymmetric manner and learn more from better-than-expected outcomes than from worse-than-expected ones. Furthermore, we show that this effect reverses when learning about counterfactual feedback and disappears when no agency is implied. We corroborate these findings by investigating idealized in-context learning agents derived through meta-reinforcement learning, where we observe similar patterns. Taken together, our results contribute to our understanding of how in-context learning works by highlighting that the framing of a problem significantly influences how learning occurs, a phenomenon also observed in human cognition.
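The asymmetry described in the abstract is conventionally formalized in cognitive psychology as a Rescorla–Wagner-style update with separate learning rates for positive and negative prediction errors. The following is a minimal sketch of that general modeling approach on a two-armed bandit, not the authors' implementation; the function and parameter names (asymmetric_update, alpha_pos, alpha_neg) and all numerical values are illustrative assumptions.

import numpy as np

def asymmetric_update(value, reward, alpha_pos, alpha_neg):
    """Rescorla-Wagner update with separate learning rates.

    The prediction error delta = reward - value is scaled by
    alpha_pos when the outcome is better than expected and by
    alpha_neg when it is worse. alpha_pos > alpha_neg yields the
    optimistic asymmetry reported in the paper.
    """
    delta = reward - value
    alpha = alpha_pos if delta > 0 else alpha_neg
    return value + alpha * delta

# Illustrative two-armed bandit with Bernoulli rewards.
rng = np.random.default_rng(0)
reward_probs = [0.7, 0.3]          # true reward probability per arm (assumed)
values = np.zeros(2)               # learned value estimates
alpha_pos, alpha_neg = 0.30, 0.15  # asymmetric learning rates (assumed)

for trial in range(200):
    # Softmax choice over current value estimates.
    logits = 3.0 * values
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    arm = rng.choice(2, p=probs)
    reward = float(rng.random() < reward_probs[arm])
    values[arm] = asymmetric_update(values[arm], reward, alpha_pos, alpha_neg)

print("learned values:", values)   # biased upward relative to the true probabilities

In this parameterization, the reversal the paper reports for counterfactual feedback would correspond to fitting alpha_neg > alpha_pos, and the absence of asymmetry in the no-agency condition to alpha_pos approximately equal to alpha_neg.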

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-schubert24a,
  title     = {In-Context Learning Agents Are Asymmetric Belief Updaters},
  author    = {Schubert, Johannes A. and Jagadish, Akshay Kumar and Binz, Marcel and Schulz, Eric},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {43928--43946},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/schubert24a/schubert24a.pdf},
  url       = {https://proceedings.mlr.press/v235/schubert24a.html},
  abstract  = {We study the in-context learning dynamics of large language models (LLMs) using three instrumental learning tasks adapted from cognitive psychology. We find that LLMs update their beliefs in an asymmetric manner and learn more from better-than-expected outcomes than from worse-than-expected ones. Furthermore, we show that this effect reverses when learning about counterfactual feedback and disappears when no agency is implied. We corroborate these findings by investigating idealized in-context learning agents derived through meta-reinforcement learning, where we observe similar patterns. Taken together, our results contribute to our understanding of how in-context learning works by highlighting that the framing of a problem significantly influences how learning occurs, a phenomenon also observed in human cognition.}
}
Endnote
%0 Conference Paper
%T In-Context Learning Agents Are Asymmetric Belief Updaters
%A Johannes A. Schubert
%A Akshay Kumar Jagadish
%A Marcel Binz
%A Eric Schulz
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-schubert24a
%I PMLR
%P 43928--43946
%U https://proceedings.mlr.press/v235/schubert24a.html
%V 235
%X We study the in-context learning dynamics of large language models (LLMs) using three instrumental learning tasks adapted from cognitive psychology. We find that LLMs update their beliefs in an asymmetric manner and learn more from better-than-expected outcomes than from worse-than-expected ones. Furthermore, we show that this effect reverses when learning about counterfactual feedback and disappears when no agency is implied. We corroborate these findings by investigating idealized in-context learning agents derived through meta-reinforcement learning, where we observe similar patterns. Taken together, our results contribute to our understanding of how in-context learning works by highlighting that the framing of a problem significantly influences how learning occurs, a phenomenon also observed in human cognition.
APA
Schubert, J.A., Jagadish, A.K., Binz, M. & Schulz, E. (2024). In-Context Learning Agents Are Asymmetric Belief Updaters. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:43928-43946. Available from https://proceedings.mlr.press/v235/schubert24a.html.