Learning Deep Neural Networks under Agnostic Corrupted Supervision

Boyang Liu, Mengying Sun, Ding Wang, Pang-Ning Tan, Jiayu Zhou
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:6957-6967, 2021.

Abstract

Training deep neural network models in the presence of corrupted supervision is challenging, as the corrupted data points may significantly degrade generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption and provides a unified framework for both classification and regression problems. Unlike many existing approaches that quantify the quality of individual data points (e.g., based on their loss values) and filter them accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the average gradient. Even when a corrupted data point fails to be excluded by our algorithm, it has only a limited impact on the overall loss, compared with state-of-the-art filtering methods based on loss values. Extensive experiments on multiple benchmark datasets demonstrate the robustness of our algorithm under different types of corruption. Our code is available at \url{https://github.com/illidanlab/PRL}.
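The core idea of filtering by gradient impact rather than by loss value can be sketched as follows. This is a minimal illustration for linear regression with per-sample gradient-norm filtering, not the authors' implementation; the function name, the corruption fraction `eps`, and all other details are our own assumptions.

```python
import numpy as np

def filtered_gradient(X, y, w, eps=0.2):
    """One robust gradient step: drop the eps-fraction of samples whose
    per-sample gradients have the largest norms, then average the rest.
    Illustrative sketch only; not the paper's actual algorithm."""
    residuals = X @ w - y                      # per-sample prediction error
    grads = residuals[:, None] * X             # per-sample gradient of squared loss
    norms = np.linalg.norm(grads, axis=1)      # per-sample gradient magnitudes
    n_keep = int(np.ceil((1 - eps) * len(y)))  # keep the (1 - eps) fraction
    keep = np.argsort(norms)[:n_keep]          # smallest gradient norms survive
    return grads[keep].mean(axis=0)

# Toy usage: corrupt 10% of the labels and run filtered gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
y[:10] += 50.0                                 # corrupted supervision
w = np.zeros(3)
for _ in range(500):
    w -= 0.1 * filtered_gradient(X, y, w, eps=0.2)
```

Because a corrupted label produces a large residual and hence a large per-sample gradient norm, such points are excluded from the averaged update, and `w` recovers `w_true` despite the corruption; a survivor's influence on the average gradient is bounded by the norms of the retained samples.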

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-liu21v,
  title     = {Learning Deep Neural Networks under Agnostic Corrupted Supervision},
  author    = {Liu, Boyang and Sun, Mengying and Wang, Ding and Tan, Pang-Ning and Zhou, Jiayu},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {6957--6967},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/liu21v/liu21v.pdf},
  url       = {https://proceedings.mlr.press/v139/liu21v.html},
  abstract  = {Training deep neural network models in the presence of corrupted supervision is challenging as the corrupted data points may significantly impact generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption and provides a unified framework for both classification and regression problems. Unlike many existing approaches that quantify the quality of the data points (e.g., based on their individual loss values), and filter them accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the average gradient. Even when a corrupted data point failed to be excluded by our algorithm, the data point will have a very limited impact on the overall loss, as compared with state-of-the-art filtering methods based on loss values. Extensive experiments on multiple benchmark datasets have demonstrated the robustness of our algorithm under different types of corruption. Our code is available at \url{https://github.com/illidanlab/PRL}.}
}
Endnote
%0 Conference Paper
%T Learning Deep Neural Networks under Agnostic Corrupted Supervision
%A Boyang Liu
%A Mengying Sun
%A Ding Wang
%A Pang-Ning Tan
%A Jiayu Zhou
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-liu21v
%I PMLR
%P 6957--6967
%U https://proceedings.mlr.press/v139/liu21v.html
%V 139
%X Training deep neural network models in the presence of corrupted supervision is challenging as the corrupted data points may significantly impact generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption and provides a unified framework for both classification and regression problems. Unlike many existing approaches that quantify the quality of the data points (e.g., based on their individual loss values), and filter them accordingly, the proposed algorithm focuses on controlling the collective impact of data points on the average gradient. Even when a corrupted data point failed to be excluded by our algorithm, the data point will have a very limited impact on the overall loss, as compared with state-of-the-art filtering methods based on loss values. Extensive experiments on multiple benchmark datasets have demonstrated the robustness of our algorithm under different types of corruption. Our code is available at \url{https://github.com/illidanlab/PRL}.
APA
Liu, B., Sun, M., Wang, D., Tan, P.-N., & Zhou, J. (2021). Learning Deep Neural Networks under Agnostic Corrupted Supervision. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:6957-6967. Available from https://proceedings.mlr.press/v139/liu21v.html.