Unit-level surprise in neural networks
Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops, PMLR 163:33-40, 2022.
Abstract
To adapt to changes in real-world data distributions, neural networks must update their parameters. We argue that unit-level surprise should be useful for: (i) determining which few parameters should be updated to adapt quickly; and (ii) learning a modularization such that only a few modules need to be adapted to transfer. We empirically validate (i) in simple settings and reflect on the challenges and opportunities of realizing both (i) and (ii) in more general settings.
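As a minimal, hypothetical sketch of the idea behind (i): one simple way to define a unit's "surprise" is the negative log-likelihood of its current activation under a running Gaussian estimate of its past activations, then gating updates to the most-surprised units. The abstract does not specify the paper's actual definition, so everything below (the `RunningStats` class, the Gaussian surprise measure, and the top-k gating rule) is an illustrative assumption, not the authors' method.

```python
import numpy as np

class RunningStats:
    """Tracks per-unit running mean and variance via exponential moving averages.

    Illustrative only: the paper's actual surprise measure may differ.
    """
    def __init__(self, n_units, momentum=0.99, eps=1e-6):
        self.mean = np.zeros(n_units)
        self.var = np.ones(n_units)
        self.momentum = momentum
        self.eps = eps

    def update(self, activations):
        # activations: (batch, n_units)
        batch_mean = activations.mean(axis=0)
        batch_var = activations.var(axis=0)
        self.mean = self.momentum * self.mean + (1 - self.momentum) * batch_mean
        self.var = self.momentum * self.var + (1 - self.momentum) * batch_var

    def surprise(self, activations):
        # Per-unit negative Gaussian log-likelihood, averaged over the batch.
        var = self.var + self.eps
        nll = 0.5 * (np.log(2 * np.pi * var) + (activations - self.mean) ** 2 / var)
        return nll.mean(axis=0)  # shape: (n_units,)

# Usage: after a distribution shift, adapt only the most-surprised units.
stats = RunningStats(n_units=8)
rng = np.random.default_rng(0)
for _ in range(100):                            # pre-shift data: all units ~ N(0, 1)
    stats.update(rng.normal(0.0, 1.0, size=(32, 8)))

shifted = rng.normal(0.0, 1.0, size=(32, 8))
shifted[:, 3] += 5.0                            # unit 3's input distribution shifts
s = stats.surprise(shifted)
most_surprised = np.argsort(s)[::-1][:2]        # gate updates to the top-k units
print(most_surprised)                           # unit 3 ranks first
```

Under this toy setup, surprise localizes the shift: only the units whose input statistics changed score highly, so a learner could restrict parameter updates to those units and leave the rest untouched.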