SIGUA: Forgetting May Make Learning with Noisy Labels More Robust

Bo Han, Gang Niu, Xingrui Yu, Quanming Yao, Miao Xu, Ivor Tsang, Masashi Sugiyama
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4006-4016, 2020.

Abstract

Given data with noisy labels, over-parameterized deep networks can gradually memorize the data and eventually fit everything. Although equipped with corrections for noisy labels, many learning methods in this area still suffer from overfitting due to undesired memorization. In this paper, to relieve this issue, we propose stochastic integrated gradient underweighted ascent (SIGUA): within a mini-batch, we adopt gradient descent on good data as usual, and learning-rate-reduced gradient ascent on bad data; the proposal is a versatile approach in which the goodness or badness of data is defined with respect to desired or undesired memorization under a given base learning method. Technically, SIGUA pulls optimization back for the sake of generalization when the two goals conflict; philosophically, SIGUA shows that forgetting undesired memorization can reinforce desired memorization. Experiments demonstrate that SIGUA successfully robustifies two typical base learning methods, often improving their performance significantly.
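
To make the mini-batch procedure concrete, below is a minimal PyTorch-style sketch of a SIGUA-like training step. It is illustrative only: the helper select_good_bad is hypothetical and stands in for however the base learning method flags good versus bad examples (e.g., a small-loss criterion), and gamma is an assumed underweighting factor for the ascent step; neither name comes from the paper.

import torch
import torch.nn.functional as F

def sigua_step(model, optimizer, x, y, select_good_bad, gamma=0.1):
    """One SIGUA-style update: descend on good data, underweighted ascent on bad data."""
    optimizer.zero_grad()
    logits = model(x)
    per_example_loss = F.cross_entropy(logits, y, reduction="none")
    # Hypothetical helper: returns boolean masks over the batch (assumes at
    # least one example is flagged as good).
    good_mask, bad_mask = select_good_bad(per_example_loss)
    # Descending on (loss_good - gamma * loss_bad) with the usual learning rate
    # performs gradient descent on the good examples and gradient ascent on the
    # bad examples with an effective learning rate scaled down by gamma.
    loss = per_example_loss[good_mask].mean()
    if bad_mask.any():
        loss = loss - gamma * per_example_loss[bad_mask].mean()
    loss.backward()
    optimizer.step()
    return loss.item()

The point of the sketch is only that a single descent step on the combined objective realizes the learning-rate-reduced ascent on bad data; how good and bad data are identified depends on the base learning method being robustified.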

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-han20c,
  title     = {{SIGUA}: Forgetting May Make Learning with Noisy Labels More Robust},
  author    = {Han, Bo and Niu, Gang and Yu, Xingrui and Yao, Quanming and Xu, Miao and Tsang, Ivor and Sugiyama, Masashi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4006--4016},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/han20c/han20c.pdf},
  url       = {https://proceedings.mlr.press/v119/han20c.html}
}
Endnote
%0 Conference Paper
%T SIGUA: Forgetting May Make Learning with Noisy Labels More Robust
%A Bo Han
%A Gang Niu
%A Xingrui Yu
%A Quanming Yao
%A Miao Xu
%A Ivor Tsang
%A Masashi Sugiyama
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-han20c
%I PMLR
%P 4006--4016
%U https://proceedings.mlr.press/v119/han20c.html
%V 119
APA
Han, B., Niu, G., Yu, X., Yao, Q., Xu, M., Tsang, I. & Sugiyama, M. (2020). SIGUA: Forgetting May Make Learning with Noisy Labels More Robust. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4006-4016. Available from https://proceedings.mlr.press/v119/han20c.html.