Multi Label Loss Correction against Missing and Corrupted Labels
Proceedings of The 14th Asian Conference on Machine
Learning, PMLR 189:359-374, 2023.
Abstract
Missing and corrupted labels can significantly harm the learning process and, consequently, the classifier performance. Multi-label learning, where each instance is tagged with a variable number of labels, is particularly affected. Although missing labels (false-negatives) are a well-studied problem in multi-label learning, handling both false-negatives (missing labels) and false-positives (corrupted labels) simultaneously in multi-label datasets is considerably more challenging. In this paper, we propose Multi-Label Loss with Self Correction (MLLSC), a loss that is robust against simultaneously missing and corrupted labels. MLLSC computes the loss based on true-positive (true-negative) and false-positive (false-negative) labels together with the expertise of the deep neural network. To distinguish between false-positive (false-negative) and true-positive (true-negative) labels, we use the output probability of the deep neural network during the learning process. As MLLSC can be combined with different types of multi-label loss functions, we also address the label imbalance problem of multi-label datasets. Empirical evaluation on real-world vision datasets, i.e., MS-COCO and MIR-FLICKR, shows that, under medium (0.3) and high (0.6) corrupted- and missing-label probabilities, our method outperforms state-of-the-art methods by, on average, 23.97 and 9.31 mean average precision (mAP) percentage points, respectively.
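
The following is a minimal PyTorch-style sketch of the idea described in the abstract, not the authors' exact MLLSC formulation: the network's own output probabilities are used to flag positive labels it scores very low (suspected corrupted, i.e., false-positive) and negative labels it scores very high (suspected missing, i.e., false-negative), and those terms are down-weighted in a standard multi-label loss. The thresholds tau_pos and tau_neg, the binary cross-entropy base loss, and the hard masking are illustrative assumptions.

# Hedged sketch: self-corrected multi-label loss that down-weights terms
# whose given label strongly disagrees with the model's predicted probability.
# Thresholds and the BCE base loss are assumptions, not the paper's formulation.
import torch
import torch.nn.functional as F

def self_corrected_multilabel_loss(logits, labels, tau_pos=0.2, tau_neg=0.8):
    """logits: (batch, num_labels) raw scores; labels: (batch, num_labels) in {0, 1}."""
    probs = torch.sigmoid(logits)

    with torch.no_grad():
        # A positive label the model scores very low is suspected corrupted
        # (false-positive); a negative label the model scores very high is
        # suspected missing (false-negative).
        suspect_fp = (labels == 1) & (probs < tau_pos)
        suspect_fn = (labels == 0) & (probs > tau_neg)
        weights = torch.ones_like(probs)
        weights[suspect_fp | suspect_fn] = 0.0  # drop suspected noisy terms

    per_label = F.binary_cross_entropy_with_logits(
        logits, labels.float(), reduction="none"
    )
    return (weights * per_label).sum() / weights.sum().clamp(min=1.0)

In practice, this plug-in weighting could wrap an imbalance-aware base loss (e.g., an asymmetric or focal-style multi-label loss) instead of plain binary cross-entropy, in line with the abstract's note that MLLSC can be combined with different types of multi-label loss functions.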