Focused Anchors Loss: cost-sensitive learning of discriminative features for imbalanced classification
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:822-835, 2019.
Abstract
Deep Neural Networks (DNNs) usually suffer performance penalties when the label distribution is skewed. This phenomenon, class imbalance, is most often mitigated outside the classification algorithm itself, usually by modifying the number of examples per class: oversampling at the expense of computational efficiency, or undersampling at the expense of statistical efficiency. In our solution, we combine discriminative feature learning with cost-sensitive learning to tackle the class-imbalance problem, using a two-step loss function that we call the Focused Anchors Loss (FAL). We evaluate FAL and its variant, the Focused Anchor Mean Loss (FAML), on $6$ different datasets against traditional cross-entropy loss and observe a significant gain in balanced accuracy on all datasets. We also outperform time-costly re-sampling and ensemble methods such as SMOTE and Near Miss on $4$ out of $6$ datasets across F1-score, AUC-ROC, and balanced accuracy. We further extend our evaluation to the image domain and use long-tailed CIFAR-10 to evaluate our loss function, where we consistently report significant improvements in accuracy. Finally, we test our loss function under extreme imbalance on a proprietary dataset and achieve a gain of $0.1$ AUC-ROC over the baseline.
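
The abstract names the two ingredients (cost-sensitive weighting and anchor-based discriminative feature learning) without giving the formula. The following is a minimal illustrative sketch of how such a combination could look in PyTorch; the function name focused_anchor_loss, the per-class anchors, the class_weights, and the mixing coefficient lam are assumptions for illustration, not the paper's definition of FAL.

# Hypothetical sketch: cost-sensitive cross-entropy combined with an
# anchor-based discriminative feature term. Names and weighting scheme
# are illustrative assumptions, not the paper's exact FAL formulation.
import torch
import torch.nn.functional as F

def focused_anchor_loss(features, logits, labels, anchors, class_weights, lam=0.5):
    # features: (B, D) embeddings, logits: (B, C) class scores,
    # labels: (B,) targets, anchors: (C, D) per-class anchor vectors,
    # class_weights: (C,) cost-sensitive weights (e.g. inverse class frequency).
    # Cost-sensitive term: cross-entropy weighted per class.
    ce = F.cross_entropy(logits, labels, weight=class_weights)
    # Discriminative term: pull each embedding toward its class anchor.
    pull = ((features - anchors[labels]) ** 2).sum(dim=1).mean()
    return ce + lam * pull

# Toy usage with random tensors.
B, D, C = 8, 16, 3
feats = torch.randn(B, D)
logits = torch.randn(B, C)
labels = torch.randint(0, C, (B,))
anchors = torch.randn(C, D)
weights = torch.tensor([0.2, 0.3, 0.5])
loss = focused_anchor_loss(feats, logits, labels, anchors, weights)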