TruthFlow: Truthful LLM Generation via Representation Flow Correction

Hanyu Wang, Bochuan Cao, Yuanpu Cao, Jinghui Chen
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:62423-62444, 2025.

Abstract

Large language models (LLMs) are known to struggle with consistently generating truthful responses. While various representation intervention techniques have been proposed, these methods typically apply a universal representation correction vector to all input queries, limiting their effectiveness against diverse queries in practice. In this study, we introduce TruthFlow, a novel method that leverages the Flow Matching technique for query-specific truthful representation correction. Specifically, TruthFlow first uses a flow model to learn query-specific correction vectors that transition representations from hallucinated to truthful states. Then, during inference, the trained flow model generates these correction vectors to enhance the truthfulness of LLM outputs. Experimental results demonstrate that TruthFlow significantly improves performance on open-ended generation tasks across various advanced LLMs evaluated on TruthfulQA. Moreover, the trained TruthFlow model exhibits strong transferability, performing effectively on other unseen hallucination benchmarks.
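
As a rough sketch of the mechanism described above (and only a sketch: the network, hyperparameters, and integration scheme are illustrative assumptions, not the authors' implementation), flow matching here amounts to training a small velocity field v_theta(x, t) on paired hallucinated/truthful hidden-state representations with a linear interpolant, then integrating that field at inference to obtain a query-specific correction vector:

import torch
import torch.nn as nn

# Illustrative sketch only: architecture and settings are assumptions,
# not the TruthFlow implementation.

class VelocityField(nn.Module):
    """Small MLP v_theta(x, t) over hidden-state representations."""
    def __init__(self, dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        # Append the scalar time to each representation.
        return self.net(torch.cat([x, t[:, None]], dim=-1))

def flow_matching_loss(model, x0, x1):
    """Conditional flow matching with a linear interpolant:
    x_t = (1 - t) * x0 + t * x1, target velocity x1 - x0,
    where x0 / x1 are hallucinated / truthful representations."""
    t = torch.rand(x0.size(0), device=x0.device)
    xt = (1 - t[:, None]) * x0 + t[:, None] * x1
    return ((model(xt, t) - (x1 - x0)) ** 2).mean()

@torch.no_grad()
def correction_vector(model, x, steps: int = 8):
    """Euler-integrate dx/dt = v_theta(x, t) from t = 0 to 1 and
    return the displacement as a query-specific correction."""
    x0, dt = x.clone(), 1.0 / steps
    for i in range(steps):
        t = torch.full((x.size(0),), i * dt, device=x.device)
        x = x + dt * model(x, t)
    return x - x0

# Hypothetical use at inference: shift the LLM's hidden state h at the
# intervention layer, e.g. h = h + correction_vector(flow, h).

At inference such a correction would typically be injected via a forward hook on the chosen transformer layer; which layer is intervened on, and how the paired representations are extracted from TruthfulQA, are details specified in the paper rather than here.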

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-wang25i,
  title     = {{T}ruth{F}low: Truthful {LLM} Generation via Representation Flow Correction},
  author    = {Wang, Hanyu and Cao, Bochuan and Cao, Yuanpu and Chen, Jinghui},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {62423--62444},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wang25i/wang25i.pdf},
  url       = {https://proceedings.mlr.press/v267/wang25i.html},
  abstract  = {Large language models (LLMs) are known to struggle with consistently generating truthful responses. While various representation intervention techniques have been proposed, these methods typically apply a universal representation correction vector to all input queries, limiting their effectiveness against diverse queries in practice. In this study, we introduce TruthFlow, a novel method that leverages the Flow Matching technique for query-specific truthful representation correction. Specifically, TruthFlow first uses a flow model to learn query-specific correction vectors that transition representations from hallucinated to truthful states. Then, during inference, the trained flow model generates these correction vectors to enhance the truthfulness of LLM outputs. Experimental results demonstrate that TruthFlow significantly improves performance on open-ended generation tasks across various advanced LLMs evaluated on TruthfulQA. Moreover, the trained TruthFlow model exhibits strong transferability, performing effectively on other unseen hallucination benchmarks.}
}
Endnote
%0 Conference Paper
%T TruthFlow: Truthful LLM Generation via Representation Flow Correction
%A Hanyu Wang
%A Bochuan Cao
%A Yuanpu Cao
%A Jinghui Chen
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wang25i
%I PMLR
%P 62423--62444
%U https://proceedings.mlr.press/v267/wang25i.html
%V 267
%X Large language models (LLMs) are known to struggle with consistently generating truthful responses. While various representation intervention techniques have been proposed, these methods typically apply a universal representation correction vector to all input queries, limiting their effectiveness against diverse queries in practice. In this study, we introduce TruthFlow, a novel method that leverages the Flow Matching technique for query-specific truthful representation correction. Specifically, TruthFlow first uses a flow model to learn query-specific correction vectors that transition representations from hallucinated to truthful states. Then, during inference, the trained flow model generates these correction vectors to enhance the truthfulness of LLM outputs. Experimental results demonstrate that TruthFlow significantly improves performance on open-ended generation tasks across various advanced LLMs evaluated on TruthfulQA. Moreover, the trained TruthFlow model exhibits strong transferability, performing effectively on other unseen hallucination benchmarks.
APA
Wang, H., Cao, B., Cao, Y. & Chen, J. (2025). TruthFlow: Truthful LLM Generation via Representation Flow Correction. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:62423-62444. Available from https://proceedings.mlr.press/v267/wang25i.html.
