Uncertainty Quantification for LLM-Based Survey Simulations

Chengpiao Huang, Yuhang Wu, Kaizheng Wang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:25947-25971, 2025.

Abstract

We investigate the use of large language models (LLMs) to simulate human responses to survey questions, and perform uncertainty quantification to gain reliable insights. Our approach converts imperfect, LLM-simulated responses into confidence sets for population parameters of human responses, addressing the distribution shift between the simulated and real populations. A key innovation lies in determining the optimal number of simulated responses: too many produce overly narrow confidence sets with poor coverage, while too few yield excessively loose estimates. To resolve this, our method adaptively selects the simulation sample size, ensuring valid average-case coverage guarantees. It is broadly applicable to any LLM, irrespective of its fidelity, and any procedure for constructing confidence sets. Additionally, the selected sample size quantifies the degree of misalignment between the LLM and the target human population. We illustrate our method on real datasets and LLMs.
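The core tension the abstract describes — more simulated responses narrow the confidence set but risk losing coverage under distribution shift — can be made concrete with a toy sketch. The code below is purely illustrative and is not the paper's procedure: `mean_ci` is a standard normal-approximation interval for a binary response rate, and `adaptive_n` is an assumed, simplified selection rule (pick the largest simulation size whose interval still covers a small real-sample estimate) invented here for illustration.

```python
import math

def mean_ci(samples, z=1.96):
    """Normal-approximation confidence interval for the mean of
    binary (0/1) survey responses. Width shrinks like 1/sqrt(n),
    so more simulated responses give a narrower set."""
    n = len(samples)
    p = sum(samples) / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

def adaptive_n(sim, real, grid):
    """Illustrative selection rule (NOT the paper's method):
    among candidate simulation sizes in `grid` (ascending), keep
    the largest n whose interval from the first n simulated
    responses still covers the mean of a small real sample."""
    real_mean = sum(real) / len(real)
    best = grid[0]
    for n in grid:
        lo, hi = mean_ci(sim[:n])
        if lo <= real_mean <= hi:
            best = n
    return best
```

With a simulated response rate of 0.6 and a real response rate of 0.5, small simulation sizes yield intervals wide enough to cover the real mean, while large sizes produce intervals concentrated around the (shifted) simulated mean — so the selected size also acts as a rough misalignment gauge, echoing the abstract's point.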

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-huang25am,
  title     = {Uncertainty Quantification for {LLM}-Based Survey Simulations},
  author    = {Huang, Chengpiao and Wu, Yuhang and Wang, Kaizheng},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {25947--25971},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/huang25am/huang25am.pdf},
  url       = {https://proceedings.mlr.press/v267/huang25am.html},
  abstract  = {We investigate the use of large language models (LLMs) to simulate human responses to survey questions, and perform uncertainty quantification to gain reliable insights. Our approach converts imperfect, LLM-simulated responses into confidence sets for population parameters of human responses, addressing the distribution shift between the simulated and real populations. A key innovation lies in determining the optimal number of simulated responses: too many produce overly narrow confidence sets with poor coverage, while too few yield excessively loose estimates. To resolve this, our method adaptively selects the simulation sample size, ensuring valid average-case coverage guarantees. It is broadly applicable to any LLM, irrespective of its fidelity, and any procedure for constructing confidence sets. Additionally, the selected sample size quantifies the degree of misalignment between the LLM and the target human population. We illustrate our method on real datasets and LLMs.}
}
Endnote
%0 Conference Paper
%T Uncertainty Quantification for LLM-Based Survey Simulations
%A Chengpiao Huang
%A Yuhang Wu
%A Kaizheng Wang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-huang25am
%I PMLR
%P 25947--25971
%U https://proceedings.mlr.press/v267/huang25am.html
%V 267
%X We investigate the use of large language models (LLMs) to simulate human responses to survey questions, and perform uncertainty quantification to gain reliable insights. Our approach converts imperfect, LLM-simulated responses into confidence sets for population parameters of human responses, addressing the distribution shift between the simulated and real populations. A key innovation lies in determining the optimal number of simulated responses: too many produce overly narrow confidence sets with poor coverage, while too few yield excessively loose estimates. To resolve this, our method adaptively selects the simulation sample size, ensuring valid average-case coverage guarantees. It is broadly applicable to any LLM, irrespective of its fidelity, and any procedure for constructing confidence sets. Additionally, the selected sample size quantifies the degree of misalignment between the LLM and the target human population. We illustrate our method on real datasets and LLMs.
APA
Huang, C., Wu, Y. & Wang, K. (2025). Uncertainty Quantification for LLM-Based Survey Simulations. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:25947-25971. Available from https://proceedings.mlr.press/v267/huang25am.html.