Trusted Loss Correction for Noisy Multi-Label Learning
Proceedings of The 14th Asian Conference on Machine
Learning, PMLR 189:343-358, 2023.
Abstract
Noisy and corrupted labels have been shown to
significantly undermine the performance of
multi-label learning, where each image carries
multiple labels. Correcting the loss via a label
corruption matrix is effective in improving the
robustness of single-label classification against
noisy labels. However, estimating the corruption
matrix for multi-label problems is challenging due
to the unbalanced distributions of labels and the
presence of multiple objects that may be mapped into
the same labels. In this paper, we propose a robust
multi-label classifier against label noise, TLCM,
which corrects the loss based on a corruption matrix
estimated on trusted data. To overcome the challenge
of unbalanced label distribution and multi-object
mapping, we use trusted single-label data as
regulators to correct the multi-label corruption
matrix. Empirical evaluation on real-world vision
and object detection datasets, i.e., MS-COCO,
NUS-WIDE, and MIRFLICKR, shows that, under
medium (30%) and high (60%) corruption levels, our
method outperforms the state-of-the-art multi-label
classifier (ASL) and the noise-resilient multi-label
classifier (MPVAE) by 12.5 and 26.3 mean average
precision (mAP) points on average, respectively.
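To make the core idea of loss correction concrete, the sketch below shows a generic forward correction for per-label binary classification: predicted clean-label probabilities are pushed through estimated flip rates before the cross-entropy is computed against the observed noisy labels. This is a minimal illustration of the corruption-matrix principle, not the paper's TLCM estimation procedure; the function name and the per-label flip-rate parameterization (`rho_pos`, `rho_neg`) are our own assumptions.

```python
import numpy as np

def forward_corrected_bce(p_clean, y_noisy, rho_pos, rho_neg):
    """Forward loss correction for per-label binary classification.

    p_clean : predicted clean-label probabilities, shape (n_labels,)
    y_noisy : observed (possibly corrupted) labels, shape (n_labels,)
    rho_pos : estimated P(noisy=0 | clean=1) per label (flips 1 -> 0)
    rho_neg : estimated P(noisy=1 | clean=0) per label (flips 0 -> 1)
    """
    p_clean = np.asarray(p_clean, dtype=float)
    y_noisy = np.asarray(y_noisy, dtype=float)
    # Push predictions through the corruption model: probability that
    # the *noisy* label is 1 under the estimated per-label flip rates.
    q = p_clean * (1.0 - rho_pos) + (1.0 - p_clean) * rho_neg
    q = np.clip(q, 1e-7, 1.0 - 1e-7)
    # Binary cross-entropy against the observed noisy labels.
    return float(-np.mean(y_noisy * np.log(q)
                          + (1.0 - y_noisy) * np.log(1.0 - q)))
```

With zero flip rates the correction is a no-op and the loss reduces to the ordinary binary cross-entropy; as the estimated flip rates grow, the classifier is no longer penalized for disagreeing with labels the corruption model deems likely flipped.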