Understanding Chain-of-Thought in LLMs through Information Theory

Jean-Francois Ton, Muhammad Faaiz Taufiq, Yang Liu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:59784-59811, 2025.

Abstract

Large Language Models (LLMs) have shown impressive performance in complex reasoning tasks through the use of Chain-of-Thought (CoT) reasoning, allowing models to break down problems into manageable sub-tasks. However, existing CoT evaluation techniques either require annotated CoT data or fall short of accurately assessing intermediate reasoning steps, leading to high rates of false positives. In this paper, we formalize CoT reasoning in LLMs through an information-theoretic lens. Specifically, our framework quantifies the ‘information gain’ at each reasoning step, enabling the identification of failure modes in LLMs without the need for expensive annotated datasets. We demonstrate the efficacy of our approach through extensive experiments on toy arithmetic, GSM8K and PRM800k datasets, where it significantly outperforms existing outcome-based methods by providing more accurate insights into model performance on individual tasks.
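The per-step ‘information gain’ idea can be illustrated with a minimal sketch (this is an illustrative toy, not the paper's exact estimator): treat each reasoning step as conditioning a predictive distribution over the final answer, and measure how much the entropy of that distribution drops after the step. A step that leaves the entropy unchanged contributes no information, which is the kind of failure mode the framework is designed to surface. The distributions below are made-up numbers for illustration.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical predictive distributions over four candidate final answers
# after observing the first t reasoning steps (t = 0, 1, 2, 3).
p_after_step = [
    [0.25, 0.25, 0.25, 0.25],  # before any reasoning: maximal uncertainty
    [0.50, 0.30, 0.10, 0.10],  # step 1 narrows the candidates
    [0.80, 0.10, 0.05, 0.05],  # step 2 adds further information
    [0.80, 0.10, 0.05, 0.05],  # step 3 changes nothing: zero gain
]

# Information gain of step t: H(Y | steps < t) - H(Y | steps <= t)
gains = [
    entropy(p_after_step[t]) - entropy(p_after_step[t + 1])
    for t in range(len(p_after_step) - 1)
]
print([round(g, 3) for g in gains])
```

In this toy example the first two steps yield positive gains while the third yields exactly zero, flagging it as uninformative; outcome-based checks that only look at the final answer would miss this distinction.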

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-ton25a,
  title     = {Understanding Chain-of-Thought in {LLM}s through Information Theory},
  author    = {Ton, Jean-Francois and Taufiq, Muhammad Faaiz and Liu, Yang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {59784--59811},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/ton25a/ton25a.pdf},
  url       = {https://proceedings.mlr.press/v267/ton25a.html},
  abstract  = {Large Language Models (LLMs) have shown impressive performance in complex reasoning tasks through the use of Chain-of-Thought (CoT) reasoning, allowing models to break down problems into manageable sub-tasks. However, existing CoT evaluation techniques either require annotated CoT data or fall short of accurately assessing intermediate reasoning steps, leading to high rates of false positives. In this paper, we formalize CoT reasoning in LLMs through an information-theoretic lens. Specifically, our framework quantifies the ‘information gain’ at each reasoning step, enabling the identification of failure modes in LLMs without the need for expensive annotated datasets. We demonstrate the efficacy of our approach through extensive experiments on toy arithmetic, GSM8K and PRM800k datasets, where it significantly outperforms existing outcome-based methods by providing more accurate insights into model performance on individual tasks.}
}
Endnote
%0 Conference Paper
%T Understanding Chain-of-Thought in LLMs through Information Theory
%A Jean-Francois Ton
%A Muhammad Faaiz Taufiq
%A Yang Liu
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-ton25a
%I PMLR
%P 59784--59811
%U https://proceedings.mlr.press/v267/ton25a.html
%V 267
%X Large Language Models (LLMs) have shown impressive performance in complex reasoning tasks through the use of Chain-of-Thought (CoT) reasoning, allowing models to break down problems into manageable sub-tasks. However, existing CoT evaluation techniques either require annotated CoT data or fall short of accurately assessing intermediate reasoning steps, leading to high rates of false positives. In this paper, we formalize CoT reasoning in LLMs through an information-theoretic lens. Specifically, our framework quantifies the ‘information gain’ at each reasoning step, enabling the identification of failure modes in LLMs without the need for expensive annotated datasets. We demonstrate the efficacy of our approach through extensive experiments on toy arithmetic, GSM8K and PRM800k datasets, where it significantly outperforms existing outcome-based methods by providing more accurate insights into model performance on individual tasks.
APA
Ton, J., Taufiq, M.F. & Liu, Y. (2025). Understanding Chain-of-Thought in LLMs through Information Theory. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:59784-59811. Available from https://proceedings.mlr.press/v267/ton25a.html.