Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning

Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:4563-4573, 2021.

Abstract

Marginal-likelihood-based model selection, even though promising, is rarely used in deep learning due to estimation difficulties. Instead, most approaches rely on validation data, which may not be readily available. In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone. Some hyperparameters can be estimated online during training, simplifying the procedure. Our marginal-likelihood estimate is based on Laplace's method and Gauss-Newton approximations to the Hessian, and it outperforms cross-validation and manual tuning on standard regression and image classification datasets, especially in terms of calibration and out-of-distribution detection. Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable (e.g., in nonstationary settings).
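As a toy illustration of the idea (not the authors' code, which targets deep networks with scalable Gauss-Newton approximations): for a linear-Gaussian model the Laplace approximation with the Gauss-Newton Hessian recovers the marginal likelihood exactly, and maximizing it over the prior precision selects a hyperparameter from training data alone. All names and values below are illustrative assumptions.

```python
import numpy as np

# Toy sketch: Laplace approximation to the log marginal likelihood for
# Bayesian linear regression, where the Gauss-Newton Hessian is exact.
# The evidence is maximized over the prior precision (a hyperparameter)
# using only the training data -- no validation set.
rng = np.random.default_rng(0)
N, D = 50, 3
X = rng.normal(size=(N, D))
w_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3                              # known observation noise (assumed)
y = X @ w_true + sigma * rng.normal(size=N)

def log_marginal_likelihood(delta):
    """Laplace evidence for prior precision `delta`; exact for this model."""
    beta = 1.0 / sigma**2                       # noise precision
    H = beta * X.T @ X + delta * np.eye(D)      # GGN Hessian + prior precision
    w_map = beta * np.linalg.solve(H, X.T @ y)  # MAP estimate
    r = y - X @ w_map
    log_lik = -0.5 * beta * r @ r - 0.5 * N * np.log(2 * np.pi / beta)
    log_prior = -0.5 * delta * w_map @ w_map + 0.5 * D * np.log(delta / (2 * np.pi))
    # Occam factor from the curvature around the mode
    occam = 0.5 * D * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(H)[1]
    return log_lik + log_prior + occam

# Grid search over the prior precision, scored by the evidence alone.
deltas = np.logspace(-3, 3, 61)
best = deltas[np.argmax([log_marginal_likelihood(d) for d in deltas])]
print(f"evidence-selected prior precision: {best:.3g}")
```

In the paper's setting the same evidence objective is approximated for deep networks, so hyperparameters such as the prior precision can even be updated online during training; this sketch only shows the shape of the objective being optimized.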

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-immer21a,
  title = {Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning},
  author = {Immer, Alexander and Bauer, Matthias and Fortuin, Vincent and R{\"a}tsch, Gunnar and Khan, Mohammad Emtiyaz},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages = {4563--4573},
  year = {2021},
  editor = {Meila, Marina and Zhang, Tong},
  volume = {139},
  series = {Proceedings of Machine Learning Research},
  month = {18--24 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v139/immer21a/immer21a.pdf},
  url = {https://proceedings.mlr.press/v139/immer21a.html},
  abstract = {Marginal-likelihood-based model selection, even though promising, is rarely used in deep learning due to estimation difficulties. Instead, most approaches rely on validation data, which may not be readily available. In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone. Some hyperparameters can be estimated online during training, simplifying the procedure. Our marginal-likelihood estimate is based on Laplace's method and Gauss-Newton approximations to the Hessian, and it outperforms cross-validation and manual tuning on standard regression and image classification datasets, especially in terms of calibration and out-of-distribution detection. Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable (e.g., in nonstationary settings).}
}
Endnote
%0 Conference Paper
%T Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning
%A Alexander Immer
%A Matthias Bauer
%A Vincent Fortuin
%A Gunnar Rätsch
%A Mohammad Emtiyaz Khan
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-immer21a
%I PMLR
%P 4563-4573
%U https://proceedings.mlr.press/v139/immer21a.html
%V 139
%X Marginal-likelihood-based model selection, even though promising, is rarely used in deep learning due to estimation difficulties. Instead, most approaches rely on validation data, which may not be readily available. In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone. Some hyperparameters can be estimated online during training, simplifying the procedure. Our marginal-likelihood estimate is based on Laplace's method and Gauss-Newton approximations to the Hessian, and it outperforms cross-validation and manual tuning on standard regression and image classification datasets, especially in terms of calibration and out-of-distribution detection. Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable (e.g., in nonstationary settings).
APA
Immer, A., Bauer, M., Fortuin, V., Rätsch, G. &amp; Khan, M.E. (2021). Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:4563-4573. Available from https://proceedings.mlr.press/v139/immer21a.html.
