Meta-Learning the Inductive Bias of Simple Neural Circuits

Will Dorrell, Maria Yuffa, Peter E. Latham
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:8389-8402, 2023.

Abstract

Training data is always finite, making it unclear how to generalise to unseen situations. But, animals do generalise, wielding Occam’s razor to select a parsimonious explanation of their observations. How they do this is called their inductive bias, and it is implicitly built into the operation of animals’ neural circuits. This relationship between an observed circuit and its inductive bias is a useful explanatory window for neuroscience, allowing design choices to be understood normatively. However, it is generally very difficult to map circuit structure to inductive bias. Here, we present a neural network tool to bridge this gap. The tool meta-learns the inductive bias by learning functions that a neural circuit finds easy to generalise, since easy-to-generalise functions are exactly those the circuit chooses to explain incomplete data. In systems with analytically known inductive bias, i.e. linear and kernel regression, our tool recovers it. Generally, we show it can flexibly extract inductive biases from supervised learners, including spiking neural networks, and show how it could be applied to real animals. Finally, we use our tool to interpret recent connectomic data illustrating our intended use: understanding the role of circuit features through the resulting inductive bias.
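
The abstract's central idea, searching for target functions that a fixed learner generalises well from only a few examples, can be illustrated with a toy sketch. The code below is not the authors' implementation: the random-ReLU-feature "circuit", the ridge-regression learning rule, the Fourier parameterisation of candidate target functions, and the finite-difference outer loop are all illustrative assumptions (in the paper the target function is itself represented by a meta-learned network and the learner can be, for example, a spiking circuit). It shows only the bilevel structure: an inner fit on a few samples, and an outer descent on the resulting test error.

# Hypothetical sketch, not the authors' code: meta-learn which functions a
# fixed learner (a stand-in "circuit") generalises easily from few samples.
import numpy as np

rng = np.random.default_rng(0)

# The "circuit": ridge regression on fixed random ReLU features of a 1-D input.
W = rng.normal(size=(50, 1))
b = rng.normal(size=50)

def features(x):
    # x: (n, 1) -> (n, 50)
    return np.maximum(x @ W.T + b, 0.0)

def fit_and_test(y_train, x_train, x_test, lam=1e-3):
    # Train the circuit on a few labelled points; return its test predictions.
    Phi = features(x_train)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y_train)
    return features(x_test) @ w

# Candidate target functions, parameterised by theta (a small Fourier basis).
freqs = np.arange(1, 6)

def target(theta, x):
    return np.sin(x * freqs) @ theta  # (n, 1) -> (n,)

def generalisation_gap(theta, x_tr, x_te):
    # Meta-objective: the circuit's test error after training on few samples.
    y_pred = fit_and_test(target(theta, x_tr), x_tr, x_te)
    return np.mean((y_pred - target(theta, x_te)) ** 2)

# Outer loop: descend the meta-objective w.r.t. theta (finite differences keep
# the sketch dependency-free), renormalising theta so the trivial zero
# function is excluded. Minima are the easy-to-generalise functions, i.e. a
# readout of the circuit's inductive bias.
theta = rng.normal(size=len(freqs))
theta /= np.linalg.norm(theta)
for step in range(200):
    x_tr = rng.uniform(-np.pi, np.pi, size=(10, 1))
    x_te = rng.uniform(-np.pi, np.pi, size=(200, 1))
    base = generalisation_gap(theta, x_tr, x_te)
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        eps = np.zeros_like(theta)
        eps[i] = 1e-3
        grad[i] = (generalisation_gap(theta + eps, x_tr, x_te) - base) / 1e-3
    theta -= 0.1 * grad
    theta /= np.linalg.norm(theta)

print("Coefficients of an easy-to-generalise function:", np.round(theta, 3))

Run to completion, theta concentrates on whichever frequencies this particular feature map fits most readily from ten samples; that is the kind of readout of a learner's inductive bias the paper proposes at scale, with richer function classes and circuit models.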

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-dorrell23a,
  title     = {Meta-Learning the Inductive Bias of Simple Neural Circuits},
  author    = {Dorrell, Will and Yuffa, Maria and Latham, Peter E.},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {8389--8402},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/dorrell23a/dorrell23a.pdf},
  url       = {https://proceedings.mlr.press/v202/dorrell23a.html},
  abstract  = {Training data is always finite, making it unclear how to generalise to unseen situations. But, animals do generalise, wielding Occam’s razor to select a parsimonious explanation of their observations. How they do this is called their inductive bias, and it is implicitly built into the operation of animals’ neural circuits. This relationship between an observed circuit and its inductive bias is a useful explanatory window for neuroscience, allowing design choices to be understood normatively. However, it is generally very difficult to map circuit structure to inductive bias. Here, we present a neural network tool to bridge this gap. The tool meta-learns the inductive bias by learning functions that a neural circuit finds easy to generalise, since easy-to-generalise functions are exactly those the circuit chooses to explain incomplete data. In systems with analytically known inductive bias, i.e. linear and kernel regression, our tool recovers it. Generally, we show it can flexibly extract inductive biases from supervised learners, including spiking neural networks, and show how it could be applied to real animals. Finally, we use our tool to interpret recent connectomic data illustrating our intended use: understanding the role of circuit features through the resulting inductive bias.}
}
Endnote
%0 Conference Paper
%T Meta-Learning the Inductive Bias of Simple Neural Circuits
%A Will Dorrell
%A Maria Yuffa
%A Peter E. Latham
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-dorrell23a
%I PMLR
%P 8389--8402
%U https://proceedings.mlr.press/v202/dorrell23a.html
%V 202
%X Training data is always finite, making it unclear how to generalise to unseen situations. But, animals do generalise, wielding Occam’s razor to select a parsimonious explanation of their observations. How they do this is called their inductive bias, and it is implicitly built into the operation of animals’ neural circuits. This relationship between an observed circuit and its inductive bias is a useful explanatory window for neuroscience, allowing design choices to be understood normatively. However, it is generally very difficult to map circuit structure to inductive bias. Here, we present a neural network tool to bridge this gap. The tool meta-learns the inductive bias by learning functions that a neural circuit finds easy to generalise, since easy-to-generalise functions are exactly those the circuit chooses to explain incomplete data. In systems with analytically known inductive bias, i.e. linear and kernel regression, our tool recovers it. Generally, we show it can flexibly extract inductive biases from supervised learners, including spiking neural networks, and show how it could be applied to real animals. Finally, we use our tool to interpret recent connectomic data illustrating our intended use: understanding the role of circuit features through the resulting inductive bias.
APA
Dorrell, W., Yuffa, M. & Latham, P.E. (2023). Meta-Learning the Inductive Bias of Simple Neural Circuits. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:8389-8402. Available from https://proceedings.mlr.press/v202/dorrell23a.html.
