Learning Representations that Support Extrapolation

Taylor Webb, Zachary Dulberg, Steven Frankland, Alexander Petrov, Randall O’Reilly, Jonathan Cohen
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10136-10146, 2020.

Abstract

Extrapolation – the ability to make inferences that go beyond the scope of one’s experiences – is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
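The abstract names the technique but does not spell it out; as a minimal illustrative sketch (shapes, function name, and the epsilon constant are assumptions, not taken from the paper), temporal context normalization can be read as z-scoring each feature dimension across the items that share a temporal context, so that absolute magnitudes are removed and relative structure is preserved:

```python
import numpy as np

def temporal_context_normalization(x, eps=1e-8):
    """Z-score each feature dimension across the temporal (context) axis.

    x: array of shape (seq_len, d), e.g. embeddings of the objects in one
    analogy problem. Normalizing over axis 0 removes the shared offset and
    scale of the inputs, emphasizing the relations between objects.
    """
    mu = x.mean(axis=0, keepdims=True)       # per-feature mean over the context
    sigma = x.std(axis=0, keepdims=True)     # per-feature std over the context
    return (x - mu) / (sigma + eps)          # eps guards against zero variance

# Example: two sequences related by a shift and scale normalize identically.
a = np.array([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0]])
b = 10.0 * a + 100.0
```

Under this reading, `temporal_context_normalization(a)` and `temporal_context_normalization(b)` are (nearly) equal, which is one way a network could generalize to inputs far outside the training range.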

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-webb20a,
  title     = {Learning Representations that Support Extrapolation},
  author    = {Webb, Taylor and Dulberg, Zachary and Frankland, Steven and Petrov, Alexander and O'Reilly, Randall and Cohen, Jonathan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10136--10146},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/webb20a/webb20a.pdf},
  url       = {https://proceedings.mlr.press/v119/webb20a.html},
  abstract  = {Extrapolation – the ability to make inferences that go beyond the scope of one’s experiences – is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.}
}
Endnote
%0 Conference Paper
%T Learning Representations that Support Extrapolation
%A Taylor Webb
%A Zachary Dulberg
%A Steven Frankland
%A Alexander Petrov
%A Randall O’Reilly
%A Jonathan Cohen
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-webb20a
%I PMLR
%P 10136--10146
%U https://proceedings.mlr.press/v119/webb20a.html
%V 119
%X Extrapolation – the ability to make inferences that go beyond the scope of one’s experiences – is a hallmark of human intelligence. By contrast, the generalization exhibited by contemporary neural network algorithms is largely limited to interpolation between data points in their training corpora. In this paper, we consider the challenge of learning representations that support extrapolation. We introduce a novel visual analogy benchmark that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data. We also introduce a simple technique, temporal context normalization, that encourages representations that emphasize the relations between objects. We find that this technique enables a significant improvement in the ability to extrapolate, considerably outperforming a number of competitive techniques.
APA
Webb, T., Dulberg, Z., Frankland, S., Petrov, A., O’Reilly, R., & Cohen, J. (2020). Learning Representations that Support Extrapolation. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10136-10146. Available from https://proceedings.mlr.press/v119/webb20a.html.