On the informativeness of supervision signals

Ilia Sucholutsky, Ruairidh M. Battleday, Katherine M. Collins, Raja Marjieh, Joshua Peterson, Pulkit Singh, Umang Bhatt, Nori Jacoby, Adrian Weller, Thomas L. Griffiths
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:2036-2046, 2023.

Abstract

Supervised learning typically focuses on learning transferable representations from training examples annotated by humans. While rich annotations (like soft labels) carry more information than sparse annotations (like hard labels), they are also more expensive to collect. For example, while hard labels only provide information about the closest class an object belongs to (e.g., “this is a dog”), soft labels provide information about the object’s relationship with multiple classes (e.g., “this is most likely a dog, but it could also be a wolf or a coyote”). We use information theory to compare how a number of commonly-used supervision signals contribute to representation-learning performance, as well as how their capacity is affected by factors such as the number of labels, classes, dimensions, and noise. Our framework provides theoretical justification for using hard labels in the big-data regime, but richer supervision signals for few-shot learning and out-of-distribution generalization. We validate these results empirically in a series of experiments with over 1 million crowdsourced image annotations and conduct a cost-benefit analysis to establish a tradeoff curve that enables users to optimize the cost of supervising representation learning on their own datasets.
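
To make the hard-versus-soft tradeoff concrete, here is a minimal sketch (our illustration, not the paper's analysis; the three-class label distribution and the annotator model are hypothetical). It treats a soft label as a probability distribution over classes and measures, in bits, how far the empirical distribution of k one-vote hard labels remains from it:

    import numpy as np

    rng = np.random.default_rng(0)

    def kl_bits(p, q):
        """KL divergence D(p || q) in bits between discrete distributions."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

    # Hypothetical soft label for one image over three classes
    # (dog, wolf, coyote); a single rich annotation yields this directly.
    soft_label = np.array([0.7, 0.2, 0.1])

    # Sparse supervision instead: each annotator casts one hard label,
    # modeled here as a draw from the same underlying distribution.
    for k in (1, 10, 100, 1000):
        votes = rng.choice(3, size=k, p=soft_label)
        counts = np.bincount(votes, minlength=3)
        empirical = (counts + 1) / (k + 3)  # Laplace smoothing avoids zeros
        print(f"{k:5d} hard labels -> D(soft || empirical) = "
              f"{kl_bits(soft_label, empirical):.4f} bits")

A single soft label supplies at once what many independent hard labels only approximate; the cost-benefit analysis in the paper quantifies when collecting the richer signal is worth its higher per-annotation cost.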

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-sucholutsky23a,
  title     = {On the informativeness of supervision signals},
  author    = {Sucholutsky, Ilia and Battleday, Ruairidh M. and Collins, Katherine M. and Marjieh, Raja and Peterson, Joshua and Singh, Pulkit and Bhatt, Umang and Jacoby, Nori and Weller, Adrian and Griffiths, Thomas L.},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {2036--2046},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/sucholutsky23a/sucholutsky23a.pdf},
  url       = {https://proceedings.mlr.press/v216/sucholutsky23a.html}
}
APA
Sucholutsky, I., Battleday, R.M., Collins, K.M., Marjieh, R., Peterson, J., Singh, P., Bhatt, U., Jacoby, N., Weller, A. & Griffiths, T.L. (2023). On the informativeness of supervision signals. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:2036-2046. Available from https://proceedings.mlr.press/v216/sucholutsky23a.html.
