Testing the Genomic Bottleneck Hypothesis in Hebbian Meta-Learning
NeurIPS 2020 Workshop on Pre-registration in Machine Learning, PMLR 148:100-110, 2021.
Abstract
Hebbian meta-learning has recently shown promise for solving hard reinforcement learning problems, allowing agents to adapt, to some degree, to changes in the environment. However, because each synapse in these approaches can learn a very specific learning rule, the ability to generalize to very different situations is likely reduced. We hypothesize that limiting the number of Hebbian learning rules through a “genomic bottleneck” can act as a regularizer, leading to better generalization across changes to the environment. We test this hypothesis by decoupling the number of Hebbian learning rules from the number of synapses and systematically varying the number of rules. The results in this paper suggest that simultaneously learning the Hebbian rules and their assignment to synapses is a difficult optimization problem, leading to poor performance in the environments tested. However, research parallel to ours finds that it is indeed possible to reduce the number of learning rules by clustering similar rules together. How best to implement a “genomic bottleneck” algorithm thus remains an important research direction that warrants further investigation.
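The decoupling described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it assumes the common ABCD Hebbian parametrization, Δw = η(A·pre·post + B·pre + C·post + D), and an assignment map from each synapse to one of K shared rules, where K is much smaller than the number of synapses (the "genomic bottleneck"). All sizes and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, K = 4, 3, 2  # K shared Hebbian rules for n_in * n_out synapses

# Each rule has 5 parameters: (eta, A, B, C, D); these would be meta-learned.
rules = rng.normal(size=(K, 5))
# Assignment map: each synapse points to one of the K shared rules.
assign = rng.integers(0, K, size=(n_in, n_out))

W = rng.normal(scale=0.1, size=(n_in, n_out))

def hebbian_step(W, pre, post):
    # Look up each synapse's rule parameters, then apply the ABCD update.
    eta, A, B, C, D = np.moveaxis(rules[assign], -1, 0)  # each (n_in, n_out)
    dW = eta * (A * np.outer(pre, post)
                + B * pre[:, None]
                + C * post[None, :]
                + D)
    return W + dW

pre = rng.normal(size=n_in)
post = np.tanh(pre @ W)      # simple forward pass through one layer
W = hebbian_step(W, pre, post)
```

With a one-to-one assignment (K equal to the number of synapses) this reduces to per-synapse rules; shrinking K forces rule sharing, which is the regularizing bottleneck being tested.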