Continual Learning via Sequential Function-Space Variational Inference

Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, Yarin Gal
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:18871-18887, 2022.

Abstract

Sequential Bayesian inference over predictive functions is a natural framework for continual learning from streams of data. However, applying it to neural networks has proved challenging in practice. Addressing the drawbacks of existing techniques, we propose an optimization objective derived by formulating continual learning as sequential function-space variational inference. In contrast to existing methods that regularize neural network parameters directly, this objective allows parameters to vary widely during training, enabling better adaptation to new tasks. Compared to objectives that directly regularize neural network predictions, the proposed objective allows for more flexible variational distributions and more effective regularization. We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods while depending less on maintaining a set of representative points from previous tasks.
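The abstract's central object is a per-task variational objective. As a minimal sketch (notation chosen here for illustration, not copied from the paper): at task t, a variational distribution q_t over predictive functions f is fitted by maximizing

    % Schematic per-task objective for sequential function-space
    % variational inference: expected log-likelihood on the new task's
    % data, regularized by a function-space KL to the previous posterior.
    \mathcal{F}(q_t)
      = \mathbb{E}_{q_t(f)}\big[ \log p(\mathcal{D}_t \mid f) \big]
      - \mathbb{D}_{\mathrm{KL}}\big( q_t(f) \,\|\, q_{t-1}(f) \big)

where \mathcal{D}_t is the data for task t and the previous variational posterior q_{t-1}(f) plays the role of the prior. Because the KL divergence is taken between distributions over functions rather than over parameters, the parameters themselves remain free to change across tasks; this is the contrast the abstract draws with methods that regularize network parameters directly.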

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-rudner22a,
  title     = {Continual Learning via Sequential Function-Space Variational Inference},
  author    = {Rudner, Tim G. J. and Bickford Smith, Freddie and Feng, Qixuan and Teh, Yee Whye and Gal, Yarin},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {18871--18887},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/rudner22a/rudner22a.pdf},
  url       = {https://proceedings.mlr.press/v162/rudner22a.html},
  abstract  = {Sequential Bayesian inference over predictive functions is a natural framework for continual learning from streams of data. However, applying it to neural networks has proved challenging in practice. Addressing the drawbacks of existing techniques, we propose an optimization objective derived by formulating continual learning as sequential function-space variational inference. In contrast to existing methods that regularize neural network parameters directly, this objective allows parameters to vary widely during training, enabling better adaptation to new tasks. Compared to objectives that directly regularize neural network predictions, the proposed objective allows for more flexible variational distributions and more effective regularization. We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods while depending less on maintaining a set of representative points from previous tasks.}
}
EndNote
%0 Conference Paper
%T Continual Learning via Sequential Function-Space Variational Inference
%A Tim G. J. Rudner
%A Freddie Bickford Smith
%A Qixuan Feng
%A Yee Whye Teh
%A Yarin Gal
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-rudner22a
%I PMLR
%P 18871--18887
%U https://proceedings.mlr.press/v162/rudner22a.html
%V 162
%X Sequential Bayesian inference over predictive functions is a natural framework for continual learning from streams of data. However, applying it to neural networks has proved challenging in practice. Addressing the drawbacks of existing techniques, we propose an optimization objective derived by formulating continual learning as sequential function-space variational inference. In contrast to existing methods that regularize neural network parameters directly, this objective allows parameters to vary widely during training, enabling better adaptation to new tasks. Compared to objectives that directly regularize neural network predictions, the proposed objective allows for more flexible variational distributions and more effective regularization. We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods while depending less on maintaining a set of representative points from previous tasks.
APA
Rudner, T.G.J., Bickford Smith, F., Feng, Q., Teh, Y.W. & Gal, Y. (2022). Continual Learning via Sequential Function-Space Variational Inference. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:18871-18887. Available from https://proceedings.mlr.press/v162/rudner22a.html.
