Unit-level surprise in neural networks

Cian Eastwood, Ian Mason, Christopher K. I. Williams
Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops, PMLR 163:33-40, 2022.

Abstract

To adapt to changes in real-world data distributions, neural networks must update their parameters. We argue that unit-level surprise should be useful for: (i) determining which few parameters should update to adapt quickly; and (ii) learning a modularization such that few modules need be adapted to transfer. We empirically validate (i) in simple settings and reflect on the challenges and opportunities of realizing both (i) and (ii) in more general settings.
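The abstract's central quantity is a per-unit surprise score used to pick which few parameters to adapt. As a minimal illustrative sketch of idea (i) only, the snippet below assumes surprise is measured as the z-score of a unit's mean activation on new data relative to activation statistics recorded during training; the paper's exact formulation may differ, and the name unit_surprise is hypothetical.

# Illustrative sketch only -- not the paper's exact method.
# Assumption: a unit's "surprise" is how far its mean activation on
# shifted data lies from its training-time mean, in units of the
# training-time standard deviation (a per-unit z-score).
import torch

def unit_surprise(train_acts: torch.Tensor, new_acts: torch.Tensor) -> torch.Tensor:
    """Per-unit surprise scores.

    train_acts: (N_train, n_units) activations recorded on training data
    new_acts:   (N_new,   n_units) activations on shifted-distribution data
    Returns a (n_units,) tensor of nonnegative surprise scores.
    """
    mu = train_acts.mean(dim=0)
    sigma = train_acts.std(dim=0).clamp_min(1e-6)  # guard against zero variance
    return (new_acts.mean(dim=0) - mu).abs() / sigma

# Toy usage: units with high surprise are candidates for adaptation.
torch.manual_seed(0)
train_acts = torch.randn(1000, 8)
new_acts = torch.randn(200, 8)
new_acts[:, 3] += 2.0  # simulate a distribution shift hitting one unit
scores = unit_surprise(train_acts, new_acts)
to_adapt = torch.topk(scores, k=2).indices  # update only the most surprised units
print(scores, to_adapt)

Under this reading, ranking units by surprise and freezing the rest is what would let a network "adapt quickly" by updating only a few parameters.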

Cite this Paper


BibTeX
@InProceedings{pmlr-v163-eastwood22a,
  title     = {Unit-level surprise in neural networks},
  author    = {Eastwood, Cian and Mason, Ian and Williams, Christopher K. I.},
  booktitle = {Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops},
  pages     = {33--40},
  year      = {2022},
  editor    = {Pradier, Melanie F. and Schein, Aaron and Hyland, Stephanie and Ruiz, Francisco J. R. and Forde, Jessica Z.},
  volume    = {163},
  series    = {Proceedings of Machine Learning Research},
  month     = {13 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v163/eastwood22a/eastwood22a.pdf},
  url       = {https://proceedings.mlr.press/v163/eastwood22a.html},
  abstract  = {To adapt to changes in real-world data distributions, neural networks must update their parameters. We argue that unit-level surprise should be useful for: (i) determining which few parameters should update to adapt quickly; and (ii) learning a modularization such that few modules need be adapted to transfer. We empirically validate (i) in simple settings and reflect on the challenges and opportunities of realizing both (i) and (ii) in more general settings.}
}
Endnote
%0 Conference Paper
%T Unit-level surprise in neural networks
%A Cian Eastwood
%A Ian Mason
%A Christopher K. I. Williams
%B Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops
%C Proceedings of Machine Learning Research
%D 2022
%E Melanie F. Pradier
%E Aaron Schein
%E Stephanie Hyland
%E Francisco J. R. Ruiz
%E Jessica Z. Forde
%F pmlr-v163-eastwood22a
%I PMLR
%P 33--40
%U https://proceedings.mlr.press/v163/eastwood22a.html
%V 163
%X To adapt to changes in real-world data distributions, neural networks must update their parameters. We argue that unit-level surprise should be useful for: (i) determining which few parameters should update to adapt quickly; and (ii) learning a modularization such that few modules need be adapted to transfer. We empirically validate (i) in simple settings and reflect on the challenges and opportunities of realizing both (i) and (ii) in more general settings.
APA
Eastwood, C., Mason, I. & Williams, C.K.I. (2022). Unit-level surprise in neural networks. Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops, in Proceedings of Machine Learning Research 163:33-40. Available from https://proceedings.mlr.press/v163/eastwood22a.html.