Position: LLM Social Simulations Are a Promising Research Method

Jacy Reese Anthis, Ryan Liu, Sean M. Richardson, Austin C. Kozlowski, Bernard Koch, Erik Brynjolfsson, James Evans, Michael S. Bernstein
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:81005-81034, 2025.

Abstract

Accurate and verifiable large language model (LLM) simulations of human research subjects promise an accessible data source for understanding human behavior and training new AI systems. However, results to date have been limited, and few social scientists have adopted this method. In this position paper, we argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We ground our argument in a review of empirical comparisons between LLMs and human research subjects, commentaries on the topic, and related work. We identify promising directions, including context-rich prompting and fine-tuning with social science datasets. We believe that LLM social simulations can already be used for pilot and exploratory studies, and more widespread use may soon be possible with rapidly advancing LLM capabilities. Researchers should prioritize developing conceptual models and iterative evaluations to make the best use of new AI systems.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-anthis25a,
  title     = {Position: {LLM} Social Simulations Are a Promising Research Method},
  author    = {Anthis, Jacy Reese and Liu, Ryan and Richardson, Sean M and Kozlowski, Austin C. and Koch, Bernard and Brynjolfsson, Erik and Evans, James and Bernstein, Michael S.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {81005--81034},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/anthis25a/anthis25a.pdf},
  url       = {https://proceedings.mlr.press/v267/anthis25a.html},
  abstract  = {Accurate and verifiable large language model (LLM) simulations of human research subjects promise an accessible data source for understanding human behavior and training new AI systems. However, results to date have been limited, and few social scientists have adopted this method. In this position paper, we argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We ground our argument in a review of empirical comparisons between LLMs and human research subjects, commentaries on the topic, and related work. We identify promising directions, including context-rich prompting and fine-tuning with social science datasets. We believe that LLM social simulations can already be used for pilot and exploratory studies, and more widespread use may soon be possible with rapidly advancing LLM capabilities. Researchers should prioritize developing conceptual models and iterative evaluations to make the best use of new AI systems.}
}
Endnote
%0 Conference Paper
%T Position: LLM Social Simulations Are a Promising Research Method
%A Jacy Reese Anthis
%A Ryan Liu
%A Sean M Richardson
%A Austin C. Kozlowski
%A Bernard Koch
%A Erik Brynjolfsson
%A James Evans
%A Michael S. Bernstein
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-anthis25a
%I PMLR
%P 81005--81034
%U https://proceedings.mlr.press/v267/anthis25a.html
%V 267
%X Accurate and verifiable large language model (LLM) simulations of human research subjects promise an accessible data source for understanding human behavior and training new AI systems. However, results to date have been limited, and few social scientists have adopted this method. In this position paper, we argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We ground our argument in a review of empirical comparisons between LLMs and human research subjects, commentaries on the topic, and related work. We identify promising directions, including context-rich prompting and fine-tuning with social science datasets. We believe that LLM social simulations can already be used for pilot and exploratory studies, and more widespread use may soon be possible with rapidly advancing LLM capabilities. Researchers should prioritize developing conceptual models and iterative evaluations to make the best use of new AI systems.
APA
Anthis, J.R., Liu, R., Richardson, S.M., Kozlowski, A.C., Koch, B., Brynjolfsson, E., Evans, J. & Bernstein, M.S. (2025). Position: LLM Social Simulations Are a Promising Research Method. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:81005-81034. Available from https://proceedings.mlr.press/v267/anthis25a.html.