When does Privileged information Explain Away Label Noise?

Guillermo Ortiz-Jimenez, Mark Collier, Anant Nawalgaria, Alexander Nicholas D’Amour, Jesse Berent, Rodolphe Jenatton, Efi Kokiopoulou
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:26646-26669, 2023.

Abstract

Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise. However, the reasons for its effectiveness are not well understood. In this study, we investigate the role played by different properties of the PI in explaining away label noise. Through experiments on multiple datasets with real PI (CIFAR-N/H) and a new large-scale benchmark ImageNet-PI, we find that PI is most helpful when it allows networks to easily distinguish clean from noisy data, while enabling a learning shortcut to memorize the noisy examples. Interestingly, when PI becomes too predictive of the target label, PI methods often perform worse than their no-PI baselines. Based on these findings, we propose several enhancements to the state-of-the-art PI methods and demonstrate the potential of PI as a means of tackling label noise. Finally, we show how we can easily combine the resulting PI approaches with existing no-PI techniques designed to deal with label noise.

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-ortiz-jimenez23a,
  title     = {When does Privileged information Explain Away Label Noise?},
  author    = {Ortiz-Jimenez, Guillermo and Collier, Mark and Nawalgaria, Anant and D'Amour, Alexander Nicholas and Berent, Jesse and Jenatton, Rodolphe and Kokiopoulou, Efi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {26646--26669},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/ortiz-jimenez23a/ortiz-jimenez23a.pdf},
  url       = {https://proceedings.mlr.press/v202/ortiz-jimenez23a.html},
  abstract  = {Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise. However, the reasons for its effectiveness are not well understood. In this study, we investigate the role played by different properties of the PI in explaining away label noise. Through experiments on multiple datasets with real PI (CIFAR-N/H) and a new large-scale benchmark ImageNet-PI, we find that PI is most helpful when it allows networks to easily distinguish clean from noisy data, while enabling a learning shortcut to memorize the noisy examples. Interestingly, when PI becomes too predictive of the target label, PI methods often perform worse than their no-PI baselines. Based on these findings, we propose several enhancements to the state-of-the-art PI methods and demonstrate the potential of PI as a means of tackling label noise. Finally, we show how we can easily combine the resulting PI approaches with existing no-PI techniques designed to deal with label noise.}
}
Endnote
%0 Conference Paper
%T When does Privileged information Explain Away Label Noise?
%A Guillermo Ortiz-Jimenez
%A Mark Collier
%A Anant Nawalgaria
%A Alexander Nicholas D’Amour
%A Jesse Berent
%A Rodolphe Jenatton
%A Efi Kokiopoulou
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-ortiz-jimenez23a
%I PMLR
%P 26646--26669
%U https://proceedings.mlr.press/v202/ortiz-jimenez23a.html
%V 202
%X Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise. However, the reasons for its effectiveness are not well understood. In this study, we investigate the role played by different properties of the PI in explaining away label noise. Through experiments on multiple datasets with real PI (CIFAR-N/H) and a new large-scale benchmark ImageNet-PI, we find that PI is most helpful when it allows networks to easily distinguish clean from noisy data, while enabling a learning shortcut to memorize the noisy examples. Interestingly, when PI becomes too predictive of the target label, PI methods often perform worse than their no-PI baselines. Based on these findings, we propose several enhancements to the state-of-the-art PI methods and demonstrate the potential of PI as a means of tackling label noise. Finally, we show how we can easily combine the resulting PI approaches with existing no-PI techniques designed to deal with label noise.
APA
Ortiz-Jimenez, G., Collier, M., Nawalgaria, A., D’Amour, A.N., Berent, J., Jenatton, R. & Kokiopoulou, E. (2023). When does Privileged information Explain Away Label Noise?. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:26646-26669. Available from https://proceedings.mlr.press/v202/ortiz-jimenez23a.html.
