Self-supervised Example Difficulty Balancing for Local Descriptor Learning

Jiahan Zhang, Dayong Tian, Tianyang Wu, Yiqing Cao, Yaoqi Du, Yiwen Wei
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:1654-1669, 2024.

Abstract

In scenarios with an imbalance between positive and negative examples, hard example mining strategies have been shown to improve recognition performance by helping models distinguish subtle differences between positive and negative examples. However, an overly strict mining strategy may introduce false negatives, and mining itself distorts the difficulty distribution of examples in the real dataset, causing the model to overfit hard examples. In this paper, we therefore explore how to balance the difficulty of mined examples so as to obtain and exploit high-quality negative examples, addressing the problem through both the loss function and the training strategy. The proposed balance loss provides an effective discriminant for the quality of negative examples by incorporating a self-supervised approach into the loss function and employing dynamic gradient modulation to adjust examples of different difficulties more finely. The proposed annealing training strategy constrains the difficulty of the mined negative examples and trains on examples of decreasing difficulty to mitigate overfitting to hard negatives. Extensive experiments demonstrate that our new sparse descriptors outperform previously established state-of-the-art sparse descriptors.
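
The abstract names two mechanisms, the balance loss and the annealing training strategy, without giving their formulas. The sketch below is a minimal PyTorch illustration of both ideas under stated assumptions, not the authors' implementation: it assumes L2-normalized descriptors, in-batch hardest-negative mining, and a triplet-style margin loss; the names (mine_negatives, balance_loss, min_dist, gamma) and the specific modulation and annealing forms are invented for illustration.

import torch

def mine_negatives(anchors, positives, min_dist):
    # In-batch hardest-negative mining with an annealing floor: negatives
    # closer to the anchor than `min_dist` are treated as too hard (or as
    # potential false negatives) and excluded from selection.
    d = torch.cdist(anchors, positives)            # (B, B) pairwise distances
    pos = d.diagonal()                             # distances of true pairs
    eye = torch.eye(d.size(0), dtype=torch.bool, device=d.device)
    d = d.masked_fill(eye, float("inf"))           # never pick the true match
    d = d.masked_fill(d < min_dist, float("inf"))  # difficulty constraint
    neg = d.min(dim=1).values                      # hardest allowed negative
    return pos, neg

def balance_loss(pos, neg, margin=1.0, gamma=2.0):
    # Triplet-style margin loss with a difficulty-dependent weight: examples
    # with a smaller positive/negative gap (harder examples) receive a larger
    # weight. The weight is detached so it rescales gradient magnitudes
    # without changing their direction -- one plausible reading of "dynamic
    # gradient modulation"; gamma (made up here) sets the modulation strength.
    residual = torch.clamp(margin + pos - neg, min=0.0)
    weight = (residual / margin).detach().pow(gamma)
    return (weight * residual).mean()

# Toy usage with a decreasing-difficulty (annealing) schedule: the floor on
# negative distance rises each epoch, so mined negatives become easier as
# training proceeds.
anchors = torch.nn.functional.normalize(torch.randn(8, 128), dim=1)
positives = torch.nn.functional.normalize(torch.randn(8, 128), dim=1)
for epoch in range(3):
    min_dist = 0.2 + 0.1 * epoch
    pos, neg = mine_negatives(anchors, positives, min_dist)
    loss = balance_loss(pos, neg)

The linear schedule and the power-law weighting are placeholders; the paper's actual discriminant for negative quality is self-supervised and its annealing schedule may differ.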

Cite this Paper

BibTeX
@InProceedings{pmlr-v222-zhang24c,
  title     = {Self-supervised Example Difficulty Balancing for Local Descriptor Learning},
  author    = {Zhang, Jiahan and Tian, Dayong and Wu, Tianyang and Cao, Yiqing and Du, Yaoqi and Wei, Yiwen},
  booktitle = {Proceedings of the 15th Asian Conference on Machine Learning},
  pages     = {1654--1669},
  year      = {2024},
  editor    = {Yanıkoğlu, Berrin and Buntine, Wray},
  volume    = {222},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v222/zhang24c/zhang24c.pdf},
  url       = {https://proceedings.mlr.press/v222/zhang24c.html}
}
Endnote
%0 Conference Paper
%T Self-supervised Example Difficulty Balancing for Local Descriptor Learning
%A Jiahan Zhang
%A Dayong Tian
%A Tianyang Wu
%A Yiqing Cao
%A Yaoqi Du
%A Yiwen Wei
%B Proceedings of the 15th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Berrin Yanıkoğlu
%E Wray Buntine
%F pmlr-v222-zhang24c
%I PMLR
%P 1654--1669
%U https://proceedings.mlr.press/v222/zhang24c.html
%V 222
APA
Zhang, J., Tian, D., Wu, T., Cao, Y., Du, Y. & Wei, Y. (2024). Self-supervised Example Difficulty Balancing for Local Descriptor Learning. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:1654-1669. Available from https://proceedings.mlr.press/v222/zhang24c.html.
