TempTest: Local Normalization Distortion and the Detection of Machine-generated Text

Tom Kempton, Stuart Burrell, Connor J Cheverall
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:1972-1980, 2025.

Abstract

Existing methods for the zero-shot detection of machine-generated text are dominated by three statistical quantities: log-likelihood, log rank, and entropy. As language models mimic the distribution of human text ever more closely, this will limit our ability to build effective detection algorithms. To combat this, we introduce a method for detecting machine-generated text that is entirely agnostic of the generating language model. This is achieved by targeting a defect in the way that decoding strategies, such as temperature or top-k sampling, normalize conditional probability measures. This method can be rigorously theoretically justified, is easily explainable, and is conceptually distinct from existing methods for detecting machine-generated text. We evaluate our detector in the white-box and black-box settings across various language models, datasets, and passage lengths. We also study the effect of paraphrasing attacks on our detector and the extent to which it is biased against non-native speakers. In each of these settings, the performance of our test is at least comparable to that of other state-of-the-art text detectors, and in some cases, we strongly outperform these baselines.
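
The sketch below is illustrative only (it is not the paper's TempTest statistic): it shows the renormalization step the abstract refers to, in which temperature sampling rescales and top-k sampling truncates the model's conditional next-token distribution before renormalizing it. The function names and toy vocabulary are assumptions made for the example.

import numpy as np

def temperature_probs(logits, temperature=0.7):
    """Rescale logits by 1/temperature, then renormalize with a softmax."""
    scaled = logits / temperature
    scaled -= scaled.max()                 # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()             # normalizing constant depends on the context

def top_k_probs(logits, k=3):
    """Keep the k most likely tokens, discard the rest, then renormalize."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    keep = np.argsort(probs)[-k:]
    truncated = np.zeros_like(probs)
    truncated[keep] = probs[keep]
    return truncated / truncated.sum()     # kept probability mass varies with the context

# Toy conditional distribution over a 10-token vocabulary.
rng = np.random.default_rng(0)
logits = rng.normal(size=10)
print(temperature_probs(logits))
print(top_k_probs(logits))

Because the renormalizing constant differs from one context to the next, the probabilities assigned under such a decoding strategy are distorted relative to the raw model distribution; the abstract's claim is that this local distortion is the signal the proposed detector exploits.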

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-kempton25a,
  title     = {TempTest: Local Normalization Distortion and the Detection of Machine-generated Text},
  author    = {Kempton, Tom and Burrell, Stuart and Cheverall, Connor J},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {1972--1980},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/kempton25a/kempton25a.pdf},
  url       = {https://proceedings.mlr.press/v258/kempton25a.html}
}
Endnote
%0 Conference Paper
%T TempTest: Local Normalization Distortion and the Detection of Machine-generated Text
%A Tom Kempton
%A Stuart Burrell
%A Connor J Cheverall
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-kempton25a
%I PMLR
%P 1972--1980
%U https://proceedings.mlr.press/v258/kempton25a.html
%V 258
APA
Kempton, T., Burrell, S. & Cheverall, C.J. (2025). TempTest: Local Normalization Distortion and the Detection of Machine-generated Text. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:1972-1980. Available from https://proceedings.mlr.press/v258/kempton25a.html.
