Prototype-Anchored Learning for Learning with Imperfect Annotations

Xiong Zhou, Xianming Liu, Deming Zhai, Junjun Jiang, Xin Gao, Xiangyang Ji
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:27245-27267, 2022.

Abstract

The success of deep neural networks relies heavily on the availability of large amounts of high-quality annotated data, which, however, are difficult or expensive to obtain. The resulting labels may be class-imbalanced, noisy, or biased by human annotators. Learning unbiased classification models from such imperfectly annotated datasets is challenging, as models trained on them typically overfit or underfit. In this work, we thoroughly investigate the popular softmax loss and margin-based loss, and offer a feasible approach to tighten the generalization error bound by maximizing the minimal sample margin. We further derive the optimality condition for this objective, which indicates how the class prototypes should be anchored. Motivated by this theoretical analysis, we propose a simple yet effective method, prototype-anchored learning (PAL), which can be easily incorporated into various learning-based classification schemes to handle imperfect annotations. We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning through extensive experiments on synthetic and real-world datasets.
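To make the central quantity concrete, the sketch below computes the sample margin (the true-class logit minus the largest competing logit) for features scored against a fixed set of class prototypes. This is a minimal illustration only: the cosine-similarity logits, the anchored_prototypes helper, and the random-orthonormal anchoring are assumptions made here for illustration; the paper's derived optimality condition for where prototypes should be anchored is not reproduced on this page.

import numpy as np

def anchored_prototypes(num_classes, dim, seed=0):
    # Fixed (non-learnable) class prototypes: random directions,
    # orthonormalized and then frozen. A random orthonormal choice is
    # only an illustrative stand-in for the paper's anchoring rule.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((dim, num_classes))   # requires dim >= num_classes
    q, _ = np.linalg.qr(w)                        # orthonormal columns
    return q.T                                    # shape: (num_classes, dim)

def sample_margins(features, labels, prototypes):
    # Sample margin of each example: logit of its labeled class minus
    # the largest competing logit, using cosine-similarity logits.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    logits = f @ prototypes.T                     # (n, num_classes)
    idx = np.arange(len(labels))
    true_logit = logits[idx, labels]
    masked = logits.copy()
    masked[idx, labels] = -np.inf                 # exclude the true class
    return true_logit - masked.max(axis=1)        # positive = correctly separated

if __name__ == "__main__":
    protos = anchored_prototypes(num_classes=10, dim=64)
    feats = np.random.default_rng(1).standard_normal((5, 64))
    labels = np.array([0, 1, 2, 3, 4])
    print(sample_margins(feats, labels, protos))

Under this reading, maximizing the minimal value returned by sample_margins over the training set is the margin-maximization objective the abstract refers to; freezing the prototype matrix is what distinguishes prototype anchoring from learning a classifier head jointly with the features.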

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhou22f,
  title     = {Prototype-Anchored Learning for Learning with Imperfect Annotations},
  author    = {Zhou, Xiong and Liu, Xianming and Zhai, Deming and Jiang, Junjun and Gao, Xin and Ji, Xiangyang},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {27245--27267},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhou22f/zhou22f.pdf},
  url       = {https://proceedings.mlr.press/v162/zhou22f.html}
}
Endnote
%0 Conference Paper
%T Prototype-Anchored Learning for Learning with Imperfect Annotations
%A Xiong Zhou
%A Xianming Liu
%A Deming Zhai
%A Junjun Jiang
%A Xin Gao
%A Xiangyang Ji
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhou22f
%I PMLR
%P 27245--27267
%U https://proceedings.mlr.press/v162/zhou22f.html
%V 162
APA
Zhou, X., Liu, X., Zhai, D., Jiang, J., Gao, X. & Ji, X. (2022). Prototype-Anchored Learning for Learning with Imperfect Annotations. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:27245-27267. Available from https://proceedings.mlr.press/v162/zhou22f.html.