A Spectral Perspective of DNN Robustness to Label Noise

Oshrat Bar, Amnon Drory, Raja Giryes
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:3732-3752, 2022.

Abstract

Deep networks usually require a massive amount of labeled data for training. Yet, such data may include mislabeled examples. Interestingly, networks have been shown to be robust to such label errors. This work uses spectral analysis of their learned mapping to explain this robustness. In particular, we relate the smoothness regularization that usually exists in conventional training to the attenuation of high frequencies, which mainly characterize noise. Using a connection between smoothness and the spectral norm of the network weights, we suggest that robustness may be further improved via spectral normalization. Empirical experiments validate our claims and show the advantage of this normalization for classification with label noise.
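
As a minimal illustration of the spectral normalization the abstract refers to: in PyTorch, torch.nn.utils.spectral_norm rescales a layer's weight matrix by a power-iteration estimate of its largest singular value, which bounds the layer's spectral norm and hence the Lipschitz constant (smoothness) of the learned mapping. This is a generic sketch, not the paper's experimental code; the architecture and dimensions below are placeholders.

import torch
import torch.nn as nn

def make_spectrally_normalized_mlp(in_dim, hidden_dim, num_classes):
    # Each linear layer is wrapped with spectral normalization, so its
    # weight matrix is divided by (an estimate of) its largest singular
    # value at every forward pass.
    return nn.Sequential(
        nn.utils.spectral_norm(nn.Linear(in_dim, hidden_dim)),
        nn.ReLU(),
        nn.utils.spectral_norm(nn.Linear(hidden_dim, num_classes)),
    )

model = make_spectrally_normalized_mlp(784, 256, 10)  # e.g. flattened 28x28 inputs
x = torch.randn(32, 784)
logits = model(x)  # trained as usual (e.g. cross-entropy) on the noisy labels
print(logits.shape)  # torch.Size([32, 10])

The design intent matching the abstract's argument: constraining the spectral norm of the weights enforces a smoother input-output mapping, which attenuates the high-frequency components that label noise mainly contributes.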

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-bar22a,
  title     = {A Spectral Perspective of DNN Robustness to Label Noise},
  author    = {Bar, Oshrat and Drory, Amnon and Giryes, Raja},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {3732--3752},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/bar22a/bar22a.pdf},
  url       = {https://proceedings.mlr.press/v151/bar22a.html}
}
Endnote
%0 Conference Paper
%T A Spectral Perspective of DNN Robustness to Label Noise
%A Oshrat Bar
%A Amnon Drory
%A Raja Giryes
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-bar22a
%I PMLR
%P 3732--3752
%U https://proceedings.mlr.press/v151/bar22a.html
%V 151
APA
Bar, O., Drory, A. & Giryes, R. (2022). A Spectral Perspective of DNN Robustness to Label Noise. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:3732-3752. Available from https://proceedings.mlr.press/v151/bar22a.html.
