Deep Similarity Learning Loss Functions in Data Transformation for Class Imbalance

Damian Horna, Mateusz Lango, Jerzy Stefanowski
Proceedings of the Fifth International Workshop on Learning with Imbalanced Domains: Theory and Applications, PMLR 241:1-15, 2024.

Abstract

Improving the classification of multi-class imbalanced data is more difficult than its two-class counterpart. In this paper, we use deep neural networks to train new representations of tabular multi-class data. Unlike the typically developed re-sampling pre-processing methods, our proposal modifies the distribution of features, i.e., the positions of examples in the learned embedded representation, and it does not modify the class sizes. To learn such embedded representations, we introduce various definitions of triplet loss functions: the simplest one uses weights related to the degree of class imbalance, while the subsequent proposals are intended for more complex distributions of examples and aim to generate a safe neighborhood of minority examples. As with resampling approaches, after applying such preprocessing, different classifiers can be trained on the new representations. Experiments with popular multi-class imbalanced benchmark data sets and three classifiers showed the advantage of the proposed approach over popular pre-processing methods as well as basic versions of neural networks with classical loss function formulations.
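
To make the idea concrete, below is a minimal sketch of the simplest variant described in the abstract: a triplet loss weighted by the degree of class imbalance. This is an illustration of the general technique, not the authors' exact formulation; the inverse-frequency weighting scheme and names such as class_counts are hypothetical, and a PyTorch setup is assumed.

import torch
import torch.nn.functional as F

def imbalance_weighted_triplet_loss(anchor, positive, negative,
                                    anchor_labels, class_counts, margin=1.0):
    # anchor, positive, negative: (batch, dim) embeddings from the encoder
    # anchor_labels: (batch,) class index of each anchor
    # class_counts: (num_classes,) number of training examples per class
    d_pos = F.pairwise_distance(anchor, positive)  # anchor vs. same-class example
    d_neg = F.pairwise_distance(anchor, negative)  # anchor vs. other-class example
    base = F.relu(d_pos - d_neg + margin)          # standard triplet hinge

    # Hypothetical weighting: anchors from rarer classes contribute more,
    # proportionally to how outnumbered their class is by the largest one.
    # E.g., with counts [500, 50, 10], a triplet anchored in the rarest class
    # is weighted 50x relative to one anchored in the majority class.
    weights = class_counts.max().float() / class_counts[anchor_labels].float()
    return (weights * base).mean()

Once an encoder is trained with such a loss, the tabular examples would be mapped into the embedding space and any off-the-shelf classifier trained on the transformed data, mirroring how resampling pre-processing is followed by ordinary classifier training.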

Cite this Paper


BibTeX
@InProceedings{pmlr-v241-horna24a,
  title     = {Deep Similarity Learning Loss Functions in Data Transformation for Class Imbalance},
  author    = {Horna, Damian and Lango, Mateusz and Stefanowski, Jerzy},
  booktitle = {Proceedings of the Fifth International Workshop on Learning with Imbalanced Domains: Theory and Applications},
  pages     = {1--15},
  year      = {2024},
  editor    = {Moniz, Nuno and Branco, Paula and Torgo, Luis and Japkowicz, Nathalie and Wozniak, Michal and Wang, Shuo},
  volume    = {241},
  series    = {Proceedings of Machine Learning Research},
  month     = {18 Sep},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v241/horna24a/horna24a.pdf},
  url       = {https://proceedings.mlr.press/v241/horna24a.html},
  abstract  = {Improving the classification of multi-class imbalanced data is more difficult than its two-class counterpart. In this paper, we use deep neural networks to train new representations of tabular multi-class data. Unlike the typically developed re-sampling pre-processing methods, our proposal modifies the distribution of features, i.e., the positions of examples in the learned embedded representation, and it does not modify the class sizes. To learn such embedded representations, we introduce various definitions of triplet loss functions: the simplest one uses weights related to the degree of class imbalance, while the subsequent proposals are intended for more complex distributions of examples and aim to generate a safe neighborhood of minority examples. As with resampling approaches, after applying such preprocessing, different classifiers can be trained on the new representations. Experiments with popular multi-class imbalanced benchmark data sets and three classifiers showed the advantage of the proposed approach over popular pre-processing methods as well as basic versions of neural networks with classical loss function formulations.}
}
Endnote
%0 Conference Paper
%T Deep Similarity Learning Loss Functions in Data Transformation for Class Imbalance
%A Damian Horna
%A Mateusz Lango
%A Jerzy Stefanowski
%B Proceedings of the Fifth International Workshop on Learning with Imbalanced Domains: Theory and Applications
%C Proceedings of Machine Learning Research
%D 2024
%E Nuno Moniz
%E Paula Branco
%E Luis Torgo
%E Nathalie Japkowicz
%E Michal Wozniak
%E Shuo Wang
%F pmlr-v241-horna24a
%I PMLR
%P 1--15
%U https://proceedings.mlr.press/v241/horna24a.html
%V 241
%X Improving the classification of multi-class imbalanced data is more difficult than its two-class counterpart. In this paper, we use deep neural networks to train new representations of tabular multi-class data. Unlike the typically developed re-sampling pre-processing methods, our proposal modifies the distribution of features, i.e., the positions of examples in the learned embedded representation, and it does not modify the class sizes. To learn such embedded representations, we introduce various definitions of triplet loss functions: the simplest one uses weights related to the degree of class imbalance, while the subsequent proposals are intended for more complex distributions of examples and aim to generate a safe neighborhood of minority examples. As with resampling approaches, after applying such preprocessing, different classifiers can be trained on the new representations. Experiments with popular multi-class imbalanced benchmark data sets and three classifiers showed the advantage of the proposed approach over popular pre-processing methods as well as basic versions of neural networks with classical loss function formulations.
APA
Horna, D., Lango, M. & Stefanowski, J. (2024). Deep Similarity Learning Loss Functions in Data Transformation for Class Imbalance. Proceedings of the Fifth International Workshop on Learning with Imbalanced Domains: Theory and Applications, in Proceedings of Machine Learning Research 241:1-15. Available from https://proceedings.mlr.press/v241/horna24a.html.