Distinguishing rule and exemplar-based generalization in learning systems

Ishita Dasgupta, Erin Grant, Tom Griffiths
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:4816-4830, 2022.

Abstract

Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate or generalize in ways that are inconsistent with our expectations. The trade-off between exemplar- and rule-based generalization has been studied extensively in cognitive psychology; in this work, we present a protocol inspired by these experimental approaches to probe the inductive biases that control this trade-off in category-learning systems such as artificial neural networks. We isolate two such inductive biases: feature-level bias (differences in which features are more readily learned) and exemplar-vs-rule bias (differences in how these learned features are used for generalization of category labels). We find that standard neural network models are feature-biased and have a propensity towards exemplar-based extrapolation; we discuss the implications of these findings for machine-learning research on data augmentation, fairness, and systematic generalization.
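The protocol contrasts two ways a trained category learner can extrapolate from the same training data: by a rule over a diagnostic feature, or by similarity to stored exemplars. The toy sketch below is a hypothetical illustration of that contrast only, not the protocol or datasets used in the paper: a probe item's single rule feature points to one category while its overall similarity to the training exemplars points to the other, and the three predictions show how a rule follower, an exemplar matcher, and a small trained learner can come apart. The dataset, the logistic-regression learner, and all names (X_train, x_probe, and so on) are assumptions made for illustration.

# Hypothetical probe (not the paper's exact protocol): a minimal sketch of how a
# conflict between a single diagnostic "rule" feature and overall exemplar
# similarity can reveal which generalization strategy a learner favours.
import numpy as np

# Training items: feature 0 is a perfect rule for the category label, while the
# remaining "context" features give each category a family-resemblance pattern.
X_train = np.array([
    [1, 1, 1, 1, 0],   # category A (rule feature = 1, context mostly 1s)
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 0, 0, 1],   # category B (rule feature = 0, context mostly 0s)
    [0, 0, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
], dtype=float)
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)

# Probe item: the rule feature says category A, but the context features make it
# most similar to the category B exemplars.
x_probe = np.array([1, 0, 0, 0, 0], dtype=float)

# Rule-based generalization: follow the single diagnostic feature.
rule_prediction = int(x_probe[0] == 1)

# Exemplar-based generalization: copy the label of the nearest stored exemplar.
distances = np.linalg.norm(X_train - x_probe, axis=1)
exemplar_prediction = int(y_train[np.argmin(distances)])

# A simple logistic-regression learner trained by gradient descent; where its
# probe prediction lands indicates which strategy this learner favours here.
w, b, lr = np.zeros(X_train.shape[1]), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    grad = p - y_train
    w -= lr * (X_train.T @ grad) / len(y_train)
    b -= lr * grad.mean()
p_probe = 1.0 / (1.0 + np.exp(-(x_probe @ w + b)))

print(f"rule-based prediction (category A = 1):     {rule_prediction}")
print(f"exemplar-based prediction (category A = 1): {exemplar_prediction}")
print(f"trained learner: P(category A | probe) = {p_probe:.2f}")

On such a conflict probe, a rule follower answers category A and an exemplar matcher answers category B; comparing where a trained model's prediction lands relative to these two references is the kind of behavioural contrast the abstract describes.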

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-dasgupta22b,
  title     = {Distinguishing rule and exemplar-based generalization in learning systems},
  author    = {Dasgupta, Ishita and Grant, Erin and Griffiths, Tom},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {4816--4830},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/dasgupta22b/dasgupta22b.pdf},
  url       = {https://proceedings.mlr.press/v162/dasgupta22b.html},
  abstract  = {Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate or generalize in ways that are inconsistent with our expectations. The trade-off between exemplar- and rule-based generalization has been studied extensively in cognitive psychology; in this work, we present a protocol inspired by these experimental approaches to probe the inductive biases that control this trade-off in category-learning systems such as artificial neural networks. We isolate two such inductive biases: feature-level bias (differences in which features are more readily learned) and exemplar-vs-rule bias (differences in how these learned features are used for generalization of category labels). We find that standard neural network models are feature-biased and have a propensity towards exemplar-based extrapolation; we discuss the implications of these findings for machine-learning research on data augmentation, fairness, and systematic generalization.}
}
Endnote
%0 Conference Paper
%T Distinguishing rule and exemplar-based generalization in learning systems
%A Ishita Dasgupta
%A Erin Grant
%A Tom Griffiths
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-dasgupta22b
%I PMLR
%P 4816--4830
%U https://proceedings.mlr.press/v162/dasgupta22b.html
%V 162
%X Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate or generalize in ways that are inconsistent with our expectations. The trade-off between exemplar- and rule-based generalization has been studied extensively in cognitive psychology; in this work, we present a protocol inspired by these experimental approaches to probe the inductive biases that control this trade-off in category-learning systems such as artificial neural networks. We isolate two such inductive biases: feature-level bias (differences in which features are more readily learned) and exemplar-vs-rule bias (differences in how these learned features are used for generalization of category labels). We find that standard neural network models are feature-biased and have a propensity towards exemplar-based extrapolation; we discuss the implications of these findings for machine-learning research on data augmentation, fairness, and systematic generalization.
APA
Dasgupta, I., Grant, E. & Griffiths, T. (2022). Distinguishing rule and exemplar-based generalization in learning systems. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:4816-4830. Available from https://proceedings.mlr.press/v162/dasgupta22b.html.