Controlling Imbalanced Error in Deep Learning with the Log Bilinear Loss
Proceedings of the First International Workshop on Learning with Imbalanced Domains: Theory and Applications, PMLR 74:141-151, 2017.
Abstract
Deep learning has become the method of choice for many machine learning tasks in recent years, and especially for multi-class classification. The most common loss function used in this context is the cross-entropy loss. While this function is insensitive to the identity of the assigned class in the case of misclassification, in practice it is very common to have imbalanced sensitivity to error, meaning some wrong assignments are much worse than others. Here we present the bilinear loss (and the related log-bilinear loss), which differentially penalizes the different wrong assignments of the model. We thoroughly test the proposed method using standard models and benchmark image datasets.
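The abstract only sketches the idea of differentially penalizing wrong assignments, so the snippet below is an illustrative sketch rather than the paper's exact formulation: a cost-matrix-weighted loss in NumPy, where a user-supplied matrix assigns a different penalty to each (true class, predicted class) pair. The names `bilinear_style_loss`, `log_bilinear_style_loss`, and `penalty` are assumptions introduced here for illustration.

```python
import numpy as np


def bilinear_style_loss(probs, labels, penalty):
    """Cost-weighted loss in the spirit of a bilinear loss (illustrative sketch).

    probs   : (n, k) array of softmax outputs.
    labels  : (n,) array of integer class labels in [0, k).
    penalty : (k, k) matrix; penalty[i, j] is the cost of placing
              probability mass on class j when the true class is i
              (the diagonal is typically zero so correct mass is free).
    """
    # For each example, take the penalty row of its true class and
    # dot it with the predicted distribution.
    per_example = np.einsum('nk,nk->n', penalty[labels], probs)
    return per_example.mean()


def log_bilinear_style_loss(probs, labels, penalty, eps=1e-7):
    """Log variant: penalizing -log(1 - p_j) grows sharply as the model
    puts more mass on a heavily penalized wrong class."""
    per_example = -np.einsum('nk,nk->n', penalty[labels],
                             np.log(1.0 - probs + eps))
    return per_example.mean()


if __name__ == "__main__":
    # Toy example with 3 classes: confusing class 0 for class 2 is
    # penalized much more heavily than any other mistake.
    penalty = np.array([[0.0, 1.0, 5.0],
                        [1.0, 0.0, 1.0],
                        [1.0, 1.0, 0.0]])
    probs = np.array([[0.2, 0.1, 0.7],   # true class 0, mass on class 2
                      [0.2, 0.7, 0.1]])  # true class 1, mostly correct
    labels = np.array([0, 1])
    print(bilinear_style_loss(probs, labels, penalty))
    print(log_bilinear_style_loss(probs, labels, penalty))
```

In practice such a term would be added to (or combined with) the usual cross-entropy objective during training; the exact combination used in the paper is given in the full text.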