The Consistency Hypothesis in Uncertainty Quantification for Large Language Models

Quan Xiao, Debarun Bhattacharjya, Balaji Ganesan, Radu Marinescu, Katya Mirylenka, Nhan H Pham, Michael Glass, Junkyu Lee
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:4636-4651, 2025.

Abstract

Estimating the confidence of large language model (LLM) outputs is essential for real-world applications requiring high user trust. Black-box uncertainty quantification (UQ) methods, relying solely on model API access, have gained popularity due to their practical benefits. In this paper, we examine the implicit assumption behind several UQ methods, which use generation consistency as a proxy for confidence, an idea we formalize as the consistency hypothesis. We introduce three mathematical statements with corresponding statistical tests to capture variations of this hypothesis and metrics to evaluate LLM output conformity across tasks. Our empirical investigation, spanning 8 benchmark datasets and 3 tasks (question answering, text summarization, and text-to-SQL), highlights the prevalence of the hypothesis under different settings. Among the statements, we highlight the ‘Sim-Any’ hypothesis as the most actionable, and demonstrate how it can be leveraged by proposing data-free black-box UQ methods that aggregate similarities between generations for confidence estimation. These approaches can outperform the closest baselines, showcasing the practical value of the empirically observed consistency hypothesis.
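As an illustration of the kind of data-free, black-box UQ method the abstract alludes to, the sketch below aggregates pairwise similarities among sampled generations into a single confidence score. This is a minimal sketch, not the authors' method: the function names are illustrative, and the toy Jaccard similarity stands in for whatever task-appropriate similarity (e.g., embedding cosine or ROUGE) one would actually use.

```python
# Minimal sketch of a consistency-based confidence estimate for LLM outputs.
# Assumptions (illustrative, not from the paper): similarity is a toy
# lexical Jaccard overlap; aggregation is the mean over all pairs.
from itertools import combinations
from typing import Callable, List


def jaccard(a: str, b: str) -> float:
    """Toy similarity: Jaccard overlap of whitespace tokens."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0


def consistency_confidence(
    generations: List[str],
    similarity: Callable[[str, str], float] = jaccard,
) -> float:
    """Mean pairwise similarity among N sampled generations for one prompt.

    Higher values mean the generations agree more with one another, which
    the consistency hypothesis treats as a proxy for model confidence.
    """
    pairs = list(combinations(generations, 2))
    if not pairs:
        return 0.0  # a single generation carries no consistency signal
    return sum(similarity(a, b) for a, b in pairs) / len(pairs)


# Example: three generations sampled (e.g., at nonzero temperature) for
# the same question; the score rises as the generations agree lexically.
gens = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "France's capital city is Paris.",
]
print(f"confidence proxy: {consistency_confidence(gens):.3f}")
```

In practice a stronger similarity (sentence-embedding cosine, ROUGE, or an NLI-based score) would replace the toy Jaccard, and other aggregations (e.g., a max over neighbors rather than a mean over pairs) are equally plausible readings of "aggregate similarities between generations."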

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-xiao25a,
  title     = {The Consistency Hypothesis in Uncertainty Quantification for Large Language Models},
  author    = {Xiao, Quan and Bhattacharjya, Debarun and Ganesan, Balaji and Marinescu, Radu and Mirylenka, Katya and Pham, Nhan H and Glass, Michael and Lee, Junkyu},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {4636--4651},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/xiao25a/xiao25a.pdf},
  url       = {https://proceedings.mlr.press/v286/xiao25a.html},
  abstract  = {Estimating the confidence of large language model (LLM) outputs is essential for real-world applications requiring high user trust. Black-box uncertainty quantification (UQ) methods, relying solely on model API access, have gained popularity due to their practical benefits. In this paper, we examine the implicit assumption behind several UQ methods, which use generation consistency as a proxy for confidence, an idea we formalize as the consistency hypothesis. We introduce three mathematical statements with corresponding statistical tests to capture variations of this hypothesis and metrics to evaluate LLM output conformity across tasks. Our empirical investigation, spanning 8 benchmark datasets and 3 tasks (question answering, text summarization, and text-to-SQL), highlights the prevalence of the hypothesis under different settings. Among the statements, we highlight the ‘Sim-Any’ hypothesis as the most actionable, and demonstrate how it can be leveraged by proposing data-free black-box UQ methods that aggregate similarities between generations for confidence estimation. These approaches can outperform the closest baselines, showcasing the practical value of the empirically observed consistency hypothesis.}
}
Endnote
%0 Conference Paper
%T The Consistency Hypothesis in Uncertainty Quantification for Large Language Models
%A Quan Xiao
%A Debarun Bhattacharjya
%A Balaji Ganesan
%A Radu Marinescu
%A Katya Mirylenka
%A Nhan H Pham
%A Michael Glass
%A Junkyu Lee
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-xiao25a
%I PMLR
%P 4636--4651
%U https://proceedings.mlr.press/v286/xiao25a.html
%V 286
%X Estimating the confidence of large language model (LLM) outputs is essential for real-world applications requiring high user trust. Black-box uncertainty quantification (UQ) methods, relying solely on model API access, have gained popularity due to their practical benefits. In this paper, we examine the implicit assumption behind several UQ methods, which use generation consistency as a proxy for confidence, an idea we formalize as the consistency hypothesis. We introduce three mathematical statements with corresponding statistical tests to capture variations of this hypothesis and metrics to evaluate LLM output conformity across tasks. Our empirical investigation, spanning 8 benchmark datasets and 3 tasks (question answering, text summarization, and text-to-SQL), highlights the prevalence of the hypothesis under different settings. Among the statements, we highlight the ‘Sim-Any’ hypothesis as the most actionable, and demonstrate how it can be leveraged by proposing data-free black-box UQ methods that aggregate similarities between generations for confidence estimation. These approaches can outperform the closest baselines, showcasing the practical value of the empirically observed consistency hypothesis.
APA
Xiao, Q., Bhattacharjya, D., Ganesan, B., Marinescu, R., Mirylenka, K., Pham, N.H., Glass, M. & Lee, J. (2025). The Consistency Hypothesis in Uncertainty Quantification for Large Language Models. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:4636-4651. Available from https://proceedings.mlr.press/v286/xiao25a.html.
