Deep k-NN for Noisy Labels

Dara Bahri, Heinrich Jiang, Maya Gupta
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:540-550, 2020.

Abstract

Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than many recently proposed methods. We also provide new statistical guarantees into its efficacy.
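The filtering idea the abstract describes can be sketched as follows: train a preliminary model, take each training example's logit vector as its feature representation, and drop examples whose label disagrees with most of their k nearest neighbors in logit space. This is a minimal illustrative sketch, not the paper's implementation; the `threshold` parameter and the exact agreement rule are assumptions made here for illustration.

```python
import numpy as np

def knn_filter(logits, labels, k=10, threshold=0.5):
    """Keep example i only if at least a `threshold` fraction of its k
    nearest neighbors in logit space share its label.

    Illustrative sketch of k-NN label filtering; `threshold` is an
    assumed parameter, not taken from the paper.
    """
    n = len(labels)
    # Pairwise squared Euclidean distances between logit vectors.
    d2 = ((logits[:, None, :] - logits[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude each point from its own neighbors
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        nbrs = np.argpartition(d2[i], k)[:k]  # indices of k nearest neighbors
        agree = (labels[nbrs] == labels[i]).mean()
        keep[i] = agree >= threshold
    return keep
```

One would then retrain the model only on the examples where `keep` is True; in practice an approximate nearest-neighbor index would replace the brute-force distance matrix for large datasets.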

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-bahri20a,
  title     = {Deep k-{NN} for Noisy Labels},
  author    = {Bahri, Dara and Jiang, Heinrich and Gupta, Maya},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {540--550},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/bahri20a/bahri20a.pdf},
  url       = {https://proceedings.mlr.press/v119/bahri20a.html},
  abstract  = {Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than many recently proposed methods. We also provide new statistical guarantees into its efficacy.}
}
Endnote
%0 Conference Paper
%T Deep k-NN for Noisy Labels
%A Dara Bahri
%A Heinrich Jiang
%A Maya Gupta
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-bahri20a
%I PMLR
%P 540--550
%U https://proceedings.mlr.press/v119/bahri20a.html
%V 119
%X Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than many recently proposed methods. We also provide new statistical guarantees into its efficacy.
APA
Bahri, D., Jiang, H. & Gupta, M. (2020). Deep k-NN for Noisy Labels. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:540-550. Available from https://proceedings.mlr.press/v119/bahri20a.html.

Related Material