The Entropy Enigma: Success and Failure of Entropy Minimization

Ori Press, Ravid Shwartz-Ziv, Yann Lecun, Matthias Bethge
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:41064-41085, 2024.

Abstract

Entropy minimization (EM) is frequently used to increase the accuracy of classification models when they’re faced with new data at test time. EM is a self-supervised learning method that optimizes classifiers to assign even higher probabilities to their top predicted classes. In this paper, we analyze why EM works when adapting a model for a few steps and why it eventually fails after adapting for many steps. We show that, at first, EM causes the model to embed test images close to training images, thereby increasing model accuracy. After many steps of optimization, EM makes the model embed test images far away from the embeddings of training images, which results in a degradation of accuracy. Building upon our insights, we present a method for solving a practical problem: estimating a model’s accuracy on a given arbitrary dataset without having access to its labels. Our method estimates accuracy by looking at how the embeddings of input images change as the model is optimized to minimize entropy. Experiments on 23 challenging datasets show that our method sets the SoTA with a mean absolute error of 5.75%, an improvement of 29.62% over the previous SoTA on this task. Our code is available at: https://github.com/oripress/EntropyEnigma
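As a rough illustration of the adaptation loop the abstract describes, the sketch below minimizes prediction entropy on unlabeled test batches in PyTorch. It is a generic entropy-minimization step in the spirit of test-time adaptation methods, not the authors' released implementation; model, optimizer, and test_loader are placeholder names.

    import torch
    import torch.nn.functional as F

    def entropy_minimization_step(model, optimizer, images):
        # One EM step: the loss is the mean Shannon entropy of the softmax
        # outputs, so gradient descent sharpens the model's own predictions
        # by pushing probability mass onto each image's top class.
        logits = model(images)
        log_probs = F.log_softmax(logits, dim=1)
        entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
        return entropy.item()

    # Hypothetical usage: adapt for a few steps on unlabeled test data.
    # optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    # for images, _ in test_loader:
    #     loss = entropy_minimization_step(model, optimizer, images)

The accuracy-estimation idea can likewise be caricatured as tracking how far test embeddings move away from their pre-adaptation positions while EM runs. The helper below is only an illustrative drift measure under that reading of the abstract, not the paper's actual estimator; feature_extractor and reference_features are assumed inputs.

    import torch

    @torch.no_grad()
    def embedding_drift(feature_extractor, images, reference_features):
        # Mean distance between current embeddings and embeddings cached
        # before adaptation began; larger movement away from the original
        # representation is read as a warning sign during EM.
        current = feature_extractor(images)
        return (current - reference_features).norm(dim=1).mean().item()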

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-press24a,
  title     = {The Entropy Enigma: Success and Failure of Entropy Minimization},
  author    = {Press, Ori and Shwartz-Ziv, Ravid and Lecun, Yann and Bethge, Matthias},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {41064--41085},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/press24a/press24a.pdf},
  url       = {https://proceedings.mlr.press/v235/press24a.html}
}
APA
Press, O., Shwartz-Ziv, R., Lecun, Y., & Bethge, M. (2024). The Entropy Enigma: Success and Failure of Entropy Minimization. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:41064-41085. Available from https://proceedings.mlr.press/v235/press24a.html.