Class Imbalance in Anomaly Detection: Learning from an Exactly Solvable Model

Francesco Saverio Pezzicoli, Valentina Ros, François P. Landes, Marco Baity-Jesi
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:1261-1269, 2025.

Abstract

Class imbalance (CI) is a longstanding problem in machine learning, slowing training and reducing performance. Although empirical remedies exist, it is often unclear which ones work best and when, due to the lack of an overarching theory. We address a common case of imbalance: anomaly (or outlier) detection. We provide a theoretical framework to analyze, interpret, and address CI, based on an exact solution of the teacher-student perceptron model through replica theory. Within this framework, one can distinguish several sources of CI: intrinsic, train, or test imbalance. Our analysis reveals that, depending on the specific problem setting, one source or another may dominate. We further find that the optimal train imbalance is generally different from 50%, with a non-trivial dependence on the intrinsic imbalance, the abundance of data, and the noise in the learning. Moreover, there is a crossover from a small-noise training regime, where results are independent of the noise level, to a high-noise regime where performance quickly degrades with noise. Our results challenge some of the conventional wisdom on CI and pave the way for integrated approaches to the topic.
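The teacher-student setup the abstract refers to can be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's solvable model (which is treated analytically via replica theory, with noise and imbalance sources the toy omits): a random teacher hyperplane labels Gaussian inputs, the training set is subsampled to a chosen train imbalance, and a plain perceptron student is evaluated on a balanced test set. All names and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50          # input dimension
n_train = 2000  # training set size

# Teacher: a fixed random unit vector; labels are the sign of its projection.
teacher = rng.standard_normal(d)
teacher /= np.linalg.norm(teacher)

def make_data(n, minority_frac):
    """Sample Gaussian inputs, label them with the teacher, then subsample
    the negative class so it makes up roughly `minority_frac` of the set."""
    X = rng.standard_normal((3 * n, d))     # oversample, then subsample
    y = np.sign(X @ teacher)
    pos, neg = X[y > 0], X[y < 0]
    n_neg = int(n * minority_frac)
    n_pos = n - n_neg
    X_out = np.vstack([pos[:n_pos], neg[:n_neg]])
    y_out = np.concatenate([np.ones(n_pos), -np.ones(n_neg)])
    idx = rng.permutation(n)
    return X_out[idx], y_out[idx]

def train_perceptron(X, y, epochs=20):
    """Student: a plain perceptron updated on every misclassified example."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w += yi * xi
    return w

# Train at 10% train imbalance, test on a balanced set: the gap between
# train and test imbalance is exactly the kind of mismatch studied here.
X, y = make_data(n_train, minority_frac=0.1)
w = train_perceptron(X, y)

X_test, y_test = make_data(2000, minority_frac=0.5)
acc = (np.sign(X_test @ w) == y_test).mean()
print(f"overlap with teacher: {w @ teacher / np.linalg.norm(w):.3f}")
print(f"balanced test accuracy: {acc:.3f}")
```

Varying `minority_frac` at train time while holding the test set balanced gives a hands-on feel for why the optimal train imbalance need not be 50%.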

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-pezzicoli25a,
  title     = {Class Imbalance in Anomaly Detection: Learning from an Exactly Solvable Model},
  author    = {Pezzicoli, Francesco Saverio and Ros, Valentina and Landes, Fran{\c{c}}ois P. and Baity-Jesi, Marco},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {1261--1269},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/pezzicoli25a/pezzicoli25a.pdf},
  url       = {https://proceedings.mlr.press/v258/pezzicoli25a.html},
  abstract  = {Class imbalance (CI) is a longstanding problem in machine learning, slowing down training and reducing performances. Although empirical remedies exist, it is often unclear which ones work best and when, due to the lack of an overarching theory. We address a common case of imbalance, that of anomaly (or outlier) detection. We provide a theoretical framework to analyze, interpret and address CI. It is based on an exact solution of the teacher-student perceptron model, through replica theory. Within this framework, one can distinguish several sources of CI: either intrinsic, train or test imbalance. Our analysis reveals that, depending on the specific problem setting, one source or another might dominate. We further find that the optimal train imbalance is generally different from 50%, with a non trivial dependence on the intrinsic imbalance, the abundance of data and on the noise in the learning. Moreover, there is a crossover between a small noise training regime where results are independent of the noise level to a high noise regime where performances quickly degrade with noise. Our results challenge some of the conventional wisdom on CI and pave the way for integrated approaches to the topic.}
}
Endnote
%0 Conference Paper
%T Class Imbalance in Anomaly Detection: Learning from an Exactly Solvable Model
%A Francesco Saverio Pezzicoli
%A Valentina Ros
%A François P. Landes
%A Marco Baity-Jesi
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-pezzicoli25a
%I PMLR
%P 1261--1269
%U https://proceedings.mlr.press/v258/pezzicoli25a.html
%V 258
%X Class imbalance (CI) is a longstanding problem in machine learning, slowing down training and reducing performances. Although empirical remedies exist, it is often unclear which ones work best and when, due to the lack of an overarching theory. We address a common case of imbalance, that of anomaly (or outlier) detection. We provide a theoretical framework to analyze, interpret and address CI. It is based on an exact solution of the teacher-student perceptron model, through replica theory. Within this framework, one can distinguish several sources of CI: either intrinsic, train or test imbalance. Our analysis reveals that, depending on the specific problem setting, one source or another might dominate. We further find that the optimal train imbalance is generally different from 50%, with a non trivial dependence on the intrinsic imbalance, the abundance of data and on the noise in the learning. Moreover, there is a crossover between a small noise training regime where results are independent of the noise level to a high noise regime where performances quickly degrade with noise. Our results challenge some of the conventional wisdom on CI and pave the way for integrated approaches to the topic.
APA
Pezzicoli, F.S., Ros, V., Landes, F.P. & Baity-Jesi, M. (2025). Class Imbalance in Anomaly Detection: Learning from an Exactly Solvable Model. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:1261-1269. Available from https://proceedings.mlr.press/v258/pezzicoli25a.html.