Self-supervised Example Difficulty Balancing for Local Descriptor Learning
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:1654-1669, 2024.
Abstract
In scenarios with an imbalance between positive and negative examples, hard example mining strategies have been shown to improve recognition performance by helping models distinguish subtle differences between positive and negative examples. However, overly strict mining strategies may introduce false negatives, and mining itself can distort the difficulty distribution of examples in the real dataset, causing the model to overfit to difficult examples. In this paper, we therefore explore how to balance the difficulty of mined examples so as to obtain and exploit high-quality negative examples, addressing the problem through both the loss function and the training strategy. The proposed balance loss provides an effective discriminant for the quality of negative examples by incorporating a self-supervised approach into the loss function and employing dynamic gradient modulation to adjust more finely for examples of different difficulties. The proposed annealing training strategy constrains the difficulty of negative examples drawn from mining and uses examples of decreasing difficulty to mitigate overfitting to hard negatives during training. Extensive experiments demonstrate that our new sparse descriptors outperform previously established state-of-the-art sparse descriptors.
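The abstract does not spell out the loss or the schedule, so the following is only a minimal PyTorch sketch of the general idea of annealed hard negative mining, under several assumptions not taken from the paper: L2-normalized descriptors, an in-batch hardest-negative triplet formulation, and a scalar `difficulty_cap` that is raised over training so that the very hardest (and most likely false) negatives are excluded first and progressively easier negatives are used. The names `annealed_hard_negative_loss` and `difficulty_cap` are hypothetical, and the paper's balance loss and dynamic gradient modulation are not reproduced here.

```python
# Illustrative sketch only; not the paper's balance loss or annealing schedule.
import torch
import torch.nn.functional as F

def annealed_hard_negative_loss(anchors, positives, margin=1.0, difficulty_cap=0.0):
    """Triplet loss over the hardest in-batch negative whose anchor distance
    is at least `difficulty_cap`. Raising the cap over training removes the
    hardest negatives (those most likely to be false negatives) and shifts
    training toward examples of decreasing difficulty.

    anchors, positives: (B, D) L2-normalized descriptors, row i of each
    forming a matching pair; all other rows serve as in-batch negatives.
    """
    # Pairwise L2 distances between anchor and positive descriptors: (B, B).
    dists = torch.cdist(anchors, positives)
    pos = dists.diag()  # distances of the matching pairs
    # Exclude the matching pairs on the diagonal from negative selection.
    neg = dists + torch.eye(len(dists), device=dists.device) * 1e6
    # Constrain difficulty: ignore negatives closer than the current cap.
    neg = torch.where(neg < difficulty_cap, torch.full_like(neg, 1e6), neg)
    hardest_neg = neg.min(dim=1).values
    return F.relu(margin + pos - hardest_neg).mean()
```

In this sketch, a simple linear schedule such as `difficulty_cap = t / T * cap_max` (with `t` the current step and `cap_max` a tuned ceiling, both hypothetical) would realize the annealing: early steps admit the hardest mined negatives, later steps restrict mining to easier ones.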