Adding seemingly uninformative labels helps in low data regimes

Christos Matsoukas, Albert Bou Hernandez, Yue Liu, Karin Dembrower, Gisele Miranda, Emir Konuk, Johan Fredin Haslum, Athanasios Zouzos, Peter Lindholm, Fredrik Strand, Kevin Smith
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6775-6784, 2020.

Abstract

Evidence suggests that networks trained on large datasets generalize well not solely because of the numerous training examples, but also because of the class diversity, which encourages the learning of enriched features. This raises the question of whether this remains true when data is scarce: is there an advantage to learning with additional labels in low-data regimes? In this work, we consider a task that requires difficult-to-obtain expert annotations: tumor segmentation in mammography images. We show that, in low-data settings, performance can be improved by complementing the expert annotations with seemingly uninformative labels from non-expert annotators, turning the task into a multi-class problem. We reveal that these gains increase when less expert data is available, and uncover several interesting properties through further studies. We demonstrate our findings on CSAW-S, a new dataset that we introduce here, and confirm them on two public datasets.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-matsoukas20a,
  title     = {Adding seemingly uninformative labels helps in low data regimes},
  author    = {Matsoukas, Christos and Hernandez, Albert Bou and Liu, Yue and Dembrower, Karin and Miranda, Gisele and Konuk, Emir and Haslum, Johan Fredin and Zouzos, Athanasios and Lindholm, Peter and Strand, Fredrik and Smith, Kevin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6775--6784},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/matsoukas20a/matsoukas20a.pdf},
  url       = {https://proceedings.mlr.press/v119/matsoukas20a.html},
  abstract  = {Evidence suggests that networks trained on large datasets generalize well not solely because of the numerous training examples, but also because of the class diversity, which encourages the learning of enriched features. This raises the question of whether this remains true when data is scarce: is there an advantage to learning with additional labels in low-data regimes? In this work, we consider a task that requires difficult-to-obtain expert annotations: tumor segmentation in mammography images. We show that, in low-data settings, performance can be improved by complementing the expert annotations with seemingly uninformative labels from non-expert annotators, turning the task into a multi-class problem. We reveal that these gains increase when less expert data is available, and uncover several interesting properties through further studies. We demonstrate our findings on CSAW-S, a new dataset that we introduce here, and confirm them on two public datasets.}
}
EndNote
%0 Conference Paper
%T Adding seemingly uninformative labels helps in low data regimes
%A Christos Matsoukas
%A Albert Bou Hernandez
%A Yue Liu
%A Karin Dembrower
%A Gisele Miranda
%A Emir Konuk
%A Johan Fredin Haslum
%A Athanasios Zouzos
%A Peter Lindholm
%A Fredrik Strand
%A Kevin Smith
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-matsoukas20a
%I PMLR
%P 6775--6784
%U https://proceedings.mlr.press/v119/matsoukas20a.html
%V 119
%X Evidence suggests that networks trained on large datasets generalize well not solely because of the numerous training examples, but also because of the class diversity, which encourages the learning of enriched features. This raises the question of whether this remains true when data is scarce: is there an advantage to learning with additional labels in low-data regimes? In this work, we consider a task that requires difficult-to-obtain expert annotations: tumor segmentation in mammography images. We show that, in low-data settings, performance can be improved by complementing the expert annotations with seemingly uninformative labels from non-expert annotators, turning the task into a multi-class problem. We reveal that these gains increase when less expert data is available, and uncover several interesting properties through further studies. We demonstrate our findings on CSAW-S, a new dataset that we introduce here, and confirm them on two public datasets.
APA
Matsoukas, C., Hernandez, A. B., Liu, Y., Dembrower, K., Miranda, G., Konuk, E., Haslum, J. F., Zouzos, A., Lindholm, P., Strand, F., & Smith, K. (2020). Adding seemingly uninformative labels helps in low data regimes. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6775-6784. Available from https://proceedings.mlr.press/v119/matsoukas20a.html.