From predictions to confidence intervals: an empirical study of conformal prediction methods for in-context learning

Zhe Huang, Simone Rossi, Rui Yuan, Thomas Hannagan
Proceedings of the 7th Symposium on Advances in Approximate Bayesian Inference, PMLR 289:67-90, 2025.

Abstract

Transformers have become a standard architecture in machine learning, demonstrating strong in-context learning (ICL) abilities that allow them to learn from the prompt at inference time. However, uncertainty quantification for ICL remains an open challenge, particularly in noisy regression tasks. This paper investigates whether ICL can be leveraged for distribution-free uncertainty estimation, proposing a method based on conformal prediction to construct prediction intervals with guaranteed coverage. While traditional conformal methods are computationally expensive due to repeated model fitting, we exploit ICL to efficiently generate confidence intervals in a single forward pass. Our empirical analysis compares this approach against ridge regression-based conformal methods, showing that conformal prediction with in-context learning (CP with ICL) achieves robust and scalable uncertainty estimates. Additionally, we evaluate its performance under distribution shifts and establish scaling laws to guide model training. These findings bridge ICL and conformal prediction, providing a new, theoretically grounded framework for uncertainty quantification in transformer-based models.
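The abstract contrasts ICL-based conformal prediction with ridge regression-based conformal baselines. As a rough illustration of the latter, the sketch below implements generic split conformal prediction around a ridge point predictor on synthetic data. It is not the authors' code: the synthetic task, variable names, and use of scikit-learn are assumptions made for the example. The ICL variant described above would instead obtain point predictions from a single transformer forward pass over the prompt, avoiding repeated model fitting.

# A minimal sketch of split conformal prediction for regression with a ridge
# regression point predictor (an assumed, illustrative baseline; not the paper's code).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic noisy linear regression task (assumed for illustration).
d, n_train, n_cal, n_test = 5, 200, 200, 500
w = rng.normal(size=d)

def sample(n):
    X = rng.normal(size=(n, d))
    y = X @ w + 0.5 * rng.normal(size=n)
    return X, y

X_tr, y_tr = sample(n_train)
X_cal, y_cal = sample(n_cal)
X_te, y_te = sample(n_test)

# Fit the point predictor on the proper training split.
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

# Nonconformity scores on the held-out calibration split: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected conformal quantile at miscoverage level alpha.
alpha = 0.1
q_level = min(np.ceil((n_cal + 1) * (1 - alpha)) / n_cal, 1.0)
q_hat = np.quantile(scores, q_level, method="higher")

# Prediction intervals with marginal coverage >= 1 - alpha under exchangeability.
preds = model.predict(X_te)
lower, upper = preds - q_hat, preds + q_hat
coverage = np.mean((y_te >= lower) & (y_te <= upper))
print(f"Empirical coverage: {coverage:.3f} (target {1 - alpha:.2f})")

In this baseline the regressor must be refit (or at least recalibrated) whenever the task changes; the ICL approach studied in the paper amortizes that step by conditioning the transformer on the in-context examples at inference time.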

Cite this Paper


BibTeX
@InProceedings{pmlr-v289-huang25a,
  title     = {From predictions to confidence intervals: an empirical study of conformal prediction methods for in-context learning},
  author    = {Huang, Zhe and Rossi, Simone and Yuan, Rui and Hannagan, Thomas},
  booktitle = {Proceedings of the 7th Symposium on Advances in Approximate Bayesian Inference},
  pages     = {67--90},
  year      = {2025},
  editor    = {Allingham, James Urquhart and Swaroop, Siddharth},
  volume    = {289},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Apr},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v289/main/assets/huang25a/huang25a.pdf},
  url       = {https://proceedings.mlr.press/v289/huang25a.html}
}
Endnote
%0 Conference Paper
%T From predictions to confidence intervals: an empirical study of conformal prediction methods for in-context learning
%A Zhe Huang
%A Simone Rossi
%A Rui Yuan
%A Thomas Hannagan
%B Proceedings of the 7th Symposium on Advances in Approximate Bayesian Inference
%C Proceedings of Machine Learning Research
%D 2025
%E James Urquhart Allingham
%E Siddharth Swaroop
%F pmlr-v289-huang25a
%I PMLR
%P 67--90
%U https://proceedings.mlr.press/v289/huang25a.html
%V 289
APA
Huang, Z., Rossi, S., Yuan, R. & Hannagan, T. (2025). From predictions to confidence intervals: an empirical study of conformal prediction methods for in-context learning. Proceedings of the 7th Symposium on Advances in Approximate Bayesian Inference, in Proceedings of Machine Learning Research 289:67-90. Available from https://proceedings.mlr.press/v289/huang25a.html.
