InnerThoughts: Disentangling Representations and Predictions in Large Language Models

Didier Chételat, Joseph Cotnareanu, Rylee Thompson, Yingxue Zhang, Mark Coates
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3862-3870, 2025.

Abstract

Large language models (LLMs) contain substantial factual knowledge, which is commonly elicited by multiple-choice question-answering prompts. Internally, such models process the prompt through multiple transformer layers, building varying representations of the problem within their hidden states. Ultimately, however, only the hidden state corresponding to the final layer and token position is used to predict the answer label. In this work, we propose instead to learn a small, separate neural network predictor module on a collection of training questions that takes the hidden states from all layers at the last temporal position as input and outputs predictions. In effect, such a framework disentangles the representational abilities of LLMs from their predictive abilities. On a collection of hard benchmarks, our method achieves considerable improvements in performance, sometimes comparable to supervised fine-tuning procedures, but at a fraction of the computational cost.
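The predictor module described in the abstract can be sketched as a small MLP head that reads the stack of per-layer hidden states at the last token position and emits answer-label logits. This is a minimal illustration, not the paper's actual architecture: the layer count, hidden size, MLP width, and random weights below are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (stand-ins): an 8-layer model with hidden size 64,
# a 32-unit MLP, and 4 answer choices (A/B/C/D).
n_layers, d_hidden, d_mlp, n_choices = 8, 64, 32, 4

def predictor_logits(hidden_states, W1, b1, W2, b2):
    """Map the stack of per-layer hidden states (all taken at the
    last temporal position) to logits over the answer labels."""
    x = hidden_states.reshape(-1)     # concatenate layers: (n_layers * d_hidden,)
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2                # logits over answer choices

# Random stand-ins for the frozen LLM's hidden states and the
# learned predictor weights (in the paper these are trained on
# a collection of training questions).
hs = rng.standard_normal((n_layers, d_hidden))
W1 = 0.01 * rng.standard_normal((d_mlp, n_layers * d_hidden))
b1 = np.zeros(d_mlp)
W2 = 0.01 * rng.standard_normal((n_choices, d_mlp))
b2 = np.zeros(n_choices)

logits = predictor_logits(hs, W1, b1, W2, b2)
pred = int(np.argmax(logits))  # index of the predicted answer label
```

The key point the sketch captures is that the LLM itself stays frozen and serves only as a feature extractor: only the small head is trained, which is why the approach can approach supervised fine-tuning at a fraction of its cost.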

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-chetelat25a,
  title     = {InnerThoughts: Disentangling Representations and Predictions in Large Language Models},
  author    = {Ch{\'e}telat, Didier and Cotnareanu, Joseph and Thompson, Rylee and Zhang, Yingxue and Coates, Mark},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3862--3870},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/chetelat25a/chetelat25a.pdf},
  url       = {https://proceedings.mlr.press/v258/chetelat25a.html},
  abstract  = {Large language models (LLMs) contain substantial factual knowledge, which is commonly elicited by multiple-choice question-answering prompts. Internally, such models process the prompt through multiple transformer layers, building varying representations of the problem within their hidden states. Ultimately, however, only the hidden state corresponding to the final layer and token position is used to predict the answer label. In this work, we propose instead to learn a small, separate neural network predictor module on a collection of training questions that takes the hidden states from all layers at the last temporal position as input and outputs predictions. In effect, such a framework disentangles the representational abilities of LLMs from their predictive abilities. On a collection of hard benchmarks, our method achieves considerable improvements in performance, sometimes comparable to supervised fine-tuning procedures, but at a fraction of the computational cost.}
}
Endnote
%0 Conference Paper
%T InnerThoughts: Disentangling Representations and Predictions in Large Language Models
%A Didier Chételat
%A Joseph Cotnareanu
%A Rylee Thompson
%A Yingxue Zhang
%A Mark Coates
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-chetelat25a
%I PMLR
%P 3862--3870
%U https://proceedings.mlr.press/v258/chetelat25a.html
%V 258
%X Large language models (LLMs) contain substantial factual knowledge, which is commonly elicited by multiple-choice question-answering prompts. Internally, such models process the prompt through multiple transformer layers, building varying representations of the problem within their hidden states. Ultimately, however, only the hidden state corresponding to the final layer and token position is used to predict the answer label. In this work, we propose instead to learn a small, separate neural network predictor module on a collection of training questions that takes the hidden states from all layers at the last temporal position as input and outputs predictions. In effect, such a framework disentangles the representational abilities of LLMs from their predictive abilities. On a collection of hard benchmarks, our method achieves considerable improvements in performance, sometimes comparable to supervised fine-tuning procedures, but at a fraction of the computational cost.
APA
Chételat, D., Cotnareanu, J., Thompson, R., Zhang, Y. & Coates, M. (2025). InnerThoughts: Disentangling Representations and Predictions in Large Language Models. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3862-3870. Available from https://proceedings.mlr.press/v258/chetelat25a.html.