Continual Reinforcement Learning with Complex Synapses

Christos Kaplanis, Murray Shanahan, Claudia Clopath
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2497-2506, 2018.

Abstract

Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.
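
For readers unfamiliar with the synapse model the abstract refers to, the sketch below illustrates how a Benna & Fusi (2016)-style chain of coupled variables might wrap a single learnable parameter: only the first variable is read out as the weight, while deeper, slower variables pull it back toward its long-term history, which is what resists abrupt overwriting. This is a minimal illustrative sketch in Python; the class name, the number of chain variables, the factor-of-two constants and the end-of-chain leak are assumptions for exposition, not the paper's exact implementation.

```python
import numpy as np

class BennaFusiParameter:
    """Illustrative complex synapse wrapped around one scalar parameter.

    Only u[0] is exposed as the learnable weight; the remaining variables
    form a cascade of progressively slower reservoirs. Constants are
    illustrative, not the paper's exact choices.
    """

    def __init__(self, n_beakers=8, g12=0.01, init=0.0):
        self.n = n_beakers
        self.u = np.full(n_beakers, init, dtype=np.float64)
        # Capacities grow and couplings shrink geometrically, so each
        # successive variable evolves on a slower timescale.
        self.C = 2.0 ** np.arange(n_beakers)            # C_k
        self.g = g12 * 2.0 ** (-np.arange(n_beakers))   # g_{k,k+1}

    def step(self, grad_update, dt=1.0):
        """Apply one learning update to u[0] and let the chain relax."""
        flow = np.zeros(self.n)
        # Diffusive flow between neighbouring variables k and k+1.
        for k in range(self.n - 1):
            f = self.g[k] * (self.u[k + 1] - self.u[k])
            flow[k] += f
            flow[k + 1] -= f
        # Weak leak at the far end of the chain (optional in some variants).
        flow[-1] += self.g[-1] * (0.0 - self.u[-1])
        # The external update (e.g. a TD or gradient step) enters only u[0].
        flow[0] += grad_update
        self.u += dt * flow / self.C
        return self.u[0]   # the visible synaptic weight


# Hypothetical usage: one chain per tabular Q-value or network weight.
syn = BennaFusiParameter(n_beakers=8, g12=0.01)
for t in range(1000):
    td_update = 0.1 * (1.0 - syn.u[0])   # stand-in for an RL update signal
    w = syn.step(td_update)
```

In the setting described by the abstract, each tabular Q-value (or each weight of a deep network) would be replaced by such a chain, so that information consolidated into the slower variables decays far more gradually than the directly updated weight.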

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-kaplanis18a,
  title     = {Continual Reinforcement Learning with Complex Synapses},
  author    = {Kaplanis, Christos and Shanahan, Murray and Clopath, Claudia},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2497--2506},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/kaplanis18a/kaplanis18a.pdf},
  url       = {https://proceedings.mlr.press/v80/kaplanis18a.html},
  abstract  = {Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.}
}
Endnote
%0 Conference Paper
%T Continual Reinforcement Learning with Complex Synapses
%A Christos Kaplanis
%A Murray Shanahan
%A Claudia Clopath
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-kaplanis18a
%I PMLR
%P 2497--2506
%U https://proceedings.mlr.press/v80/kaplanis18a.html
%V 80
%X Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.
APA
Kaplanis, C., Shanahan, M., & Clopath, C. (2018). Continual Reinforcement Learning with Complex Synapses. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2497-2506. Available from https://proceedings.mlr.press/v80/kaplanis18a.html.
