Small Data, Big Decisions: Model Selection in the Small-Data Regime

Jorg Bornschein, Francesco Visin, Simon Osindero
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1035-1044, 2020.

Abstract

Highly overparametrized neural networks can display curiously strong generalization performance – a phenomenon that has recently garnered a wealth of theoretical and empirical research aimed at better understanding it. In contrast to most previous work, which typically considers performance as a function of model size, in this paper we empirically study generalization performance as the size of the training set varies over multiple orders of magnitude. These systematic experiments lead to some interesting and potentially very useful observations; perhaps most notably that training on smaller subsets of the data can lead to more reliable model selection decisions while also incurring lower computational overhead. Our experiments furthermore allow us to estimate Minimum Description Lengths for common datasets given modern neural network architectures, thereby paving the way for principled model selection that takes Occam's razor into account.
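The core procedure the abstract describes can be illustrated with a minimal sketch: train each candidate model on nested subsets of the training data and compare validation performance at every subset size, so that the cheap small-data rankings can be checked against the full-data ranking. This is not the authors' exact protocol; it is a toy version using closed-form ridge regression on synthetic data, with the candidate "models" being two hypothetical regularization settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: 20 features, of which only 5 are informative.
d = 20
w_true = np.zeros(d)
w_true[:5] = 1.0

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.5 * rng.normal(size=n)
    return X, y

X_train, y_train = make_data(2048)   # full training set
X_val, y_val = make_data(1000)       # held-out validation set

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_mse(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

# Two candidate models, distinguished only by regularization strength.
candidates = {"low-reg": 0.1, "high-reg": 100.0}

# Evaluate candidates on nested subsets spanning two orders of magnitude.
for n in (32, 128, 512, 2048):
    Xs, ys = X_train[:n], y_train[:n]
    scores = {name: val_mse(ridge_fit(Xs, ys, lam))
              for name, lam in candidates.items()}
    best = min(scores, key=scores.get)
    print(f"n={n:5d}  best={best:9s}  scores={scores}")
```

Comparing the winner at each subset size against the full-data winner is the kind of check the paper's experiments perform at scale, with neural networks in place of the ridge models used here.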

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-bornschein20a,
  title     = {Small Data, Big Decisions: Model Selection in the Small-Data Regime},
  author    = {Bornschein, Jorg and Visin, Francesco and Osindero, Simon},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1035--1044},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/bornschein20a/bornschein20a.pdf},
  url       = {https://proceedings.mlr.press/v119/bornschein20a.html},
  abstract  = {Highly overparametrized neural networks can display curiously strong generalization performance – a phenomenon that has recently garnered a wealth of theoretical and empirical research in order to better understand it. In contrast to most previous work, which typically considers the performance as a function of the model size, in this paper we empirically study the generalization performance as the size of the training set varies over multiple orders of magnitude. These systematic experiments lead to some interesting and potentially very useful observations; perhaps most notably that training on smaller subsets of the data can lead to more reliable model selection decisions whilst simultaneously enjoying smaller computational overheads. Our experiments furthermore allow us to estimate Minimum Description Lengths for common datasets given modern neural network architectures, thereby paving the way for principled model selection taking into account Occam's razor.}
}
Endnote
%0 Conference Paper
%T Small Data, Big Decisions: Model Selection in the Small-Data Regime
%A Jorg Bornschein
%A Francesco Visin
%A Simon Osindero
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-bornschein20a
%I PMLR
%P 1035--1044
%U https://proceedings.mlr.press/v119/bornschein20a.html
%V 119
%X Highly overparametrized neural networks can display curiously strong generalization performance – a phenomenon that has recently garnered a wealth of theoretical and empirical research in order to better understand it. In contrast to most previous work, which typically considers the performance as a function of the model size, in this paper we empirically study the generalization performance as the size of the training set varies over multiple orders of magnitude. These systematic experiments lead to some interesting and potentially very useful observations; perhaps most notably that training on smaller subsets of the data can lead to more reliable model selection decisions whilst simultaneously enjoying smaller computational overheads. Our experiments furthermore allow us to estimate Minimum Description Lengths for common datasets given modern neural network architectures, thereby paving the way for principled model selection taking into account Occam's razor.
APA
Bornschein, J., Visin, F. &amp; Osindero, S. (2020). Small Data, Big Decisions: Model Selection in the Small-Data Regime. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1035-1044. Available from https://proceedings.mlr.press/v119/bornschein20a.html.