Function-Space Regularization in Neural Networks: A Probabilistic Perspective

Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, Andrew Gordon Wilson
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:29275-29290, 2023.

Abstract

Parameter-space regularization in neural network optimization is a fundamental tool for improving generalization. However, standard parameter-space regularization methods make it challenging to encode explicit preferences about desired predictive functions into neural network training. In this work, we approach regularization in neural networks from a probabilistic perspective and show that by viewing parameter-space regularization as specifying an empirical prior distribution over the model parameters, we can derive a probabilistically well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training. This method—which we refer to as function-space empirical Bayes (FS-EB)—includes both parameter- and function-space regularization, is mathematically simple, easy to implement, and incurs only minimal computational overhead compared to standard regularization techniques. We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection, highly calibrated predictive uncertainty estimates, successful task adaptation from pre-trained models, and improved generalization under covariate shift.
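To make the kind of objective described in the abstract concrete, the following PyTorch sketch combines a standard weight-space penalty (the MAP counterpart of a zero-mean Gaussian prior over parameters, i.e. weight decay) with a function-space penalty that pulls the network's outputs at a set of context inputs toward a reference function. This is only an illustration of the general idea, not the FS-EB objective derived in the paper; the function name, the context inputs, the reference outputs, and the weights alpha and beta are hypothetical choices made for the example.

import torch
import torch.nn as nn

def regularized_loss(model, x, y, x_context, f_reference, alpha=1e-4, beta=1e-2):
    """Data fit plus parameter-space and function-space regularization (illustrative)."""
    # Data-fit term: negative log-likelihood of the labels under the model.
    nll = nn.functional.cross_entropy(model(x), y)
    # Parameter-space term: squared L2 norm of the weights, the MAP analogue of a Gaussian prior.
    param_reg = sum((p ** 2).sum() for p in model.parameters())
    # Function-space term: squared distance between the model's outputs at context inputs
    # and a desired reference function evaluated at those inputs.
    func_reg = ((model(x_context) - f_reference) ** 2).sum()
    return nll + alpha * param_reg + beta * func_reg

# Toy usage with hypothetical shapes.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
x_context = torch.randn(16, 10)   # e.g. unlabeled or out-of-distribution inputs
f_reference = torch.zeros(16, 3)  # e.g. uninformative logits away from the training data
loss = regularized_loss(model, x, y, x_context, f_reference)
loss.backward()

In this sketch, setting beta to zero recovers ordinary weight-decay training, while the function-space term expresses a preference about the predictive function itself (here, pushing outputs at context points toward a fixed reference) rather than about the weights.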

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-rudner23a,
  title     = {Function-Space Regularization in Neural Networks: A Probabilistic Perspective},
  author    = {Rudner, Tim G. J. and Kapoor, Sanyam and Qiu, Shikai and Wilson, Andrew Gordon},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {29275--29290},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/rudner23a/rudner23a.pdf},
  url       = {https://proceedings.mlr.press/v202/rudner23a.html},
  abstract  = {Parameter-space regularization in neural network optimization is a fundamental tool for improving generalization. However, standard parameter-space regularization methods make it challenging to encode explicit preferences about desired predictive functions into neural network training. In this work, we approach regularization in neural networks from a probabilistic perspective and show that by viewing parameter-space regularization as specifying an empirical prior distribution over the model parameters, we can derive a probabilistically well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training. This method—which we refer to as function-space empirical Bayes (FS-EB)—includes both parameter- and function-space regularization, is mathematically simple, easy to implement, and incurs only minimal computational overhead compared to standard regularization techniques. We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection, highly calibrated predictive uncertainty estimates, successful task adaptation from pre-trained models, and improved generalization under covariate shift.}
}
Endnote
%0 Conference Paper
%T Function-Space Regularization in Neural Networks: A Probabilistic Perspective
%A Tim G. J. Rudner
%A Sanyam Kapoor
%A Shikai Qiu
%A Andrew Gordon Wilson
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-rudner23a
%I PMLR
%P 29275--29290
%U https://proceedings.mlr.press/v202/rudner23a.html
%V 202
%X Parameter-space regularization in neural network optimization is a fundamental tool for improving generalization. However, standard parameter-space regularization methods make it challenging to encode explicit preferences about desired predictive functions into neural network training. In this work, we approach regularization in neural networks from a probabilistic perspective and show that by viewing parameter-space regularization as specifying an empirical prior distribution over the model parameters, we can derive a probabilistically well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training. This method—which we refer to as function-space empirical Bayes (FS-EB)—includes both parameter- and function-space regularization, is mathematically simple, easy to implement, and incurs only minimal computational overhead compared to standard regularization techniques. We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection, highly calibrated predictive uncertainty estimates, successful task adaptation from pre-trained models, and improved generalization under covariate shift.
APA
Rudner, T.G.J., Kapoor, S., Qiu, S. & Wilson, A.G. (2023). Function-Space Regularization in Neural Networks: A Probabilistic Perspective. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:29275-29290. Available from https://proceedings.mlr.press/v202/rudner23a.html.
