Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers

Alex Lamb, Anirudh Goyal, Agnieszka Słowik, Michael Mozer, Philippe Beaudoin, Yoshua Bengio
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:919-927, 2021.

Abstract

Feed-forward neural networks consist of a sequence of layers, in which each layer performs some processing on the information from the previous layer. A downside to this approach is that each layer (or module, as multiple modules can operate in parallel) is tasked with processing the entire hidden state, rather than the particular part of the state that is most relevant for that module. Methods that operate on only a small number of input variables are an essential part of most programming languages, and they allow for improved modularity and code re-usability. Our proposed method, Neural Function Modules (NFM), aims to introduce the same structural capability into deep learning. Most prior work on combining top-down and bottom-up feedback in feed-forward networks is limited to classification problems. The key contribution of our work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm which, as we show, improves results on standard classification, out-of-domain generalization, generative modeling, and representation learning for reinforcement learning.
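
The abstract describes the mechanism only at a high level: each module should attend over the outputs of other layers and read from a sparse subset of them (its "arguments") rather than the full hidden state. As a rough illustration, the minimal PyTorch sketch below shows one way such a sparse, attention-based read could look; the class name SparseArgumentAttention, the hard top-k selection rule, and all dimensions are illustrative assumptions, not the paper's actual NFM implementation.

# Hypothetical sketch of "sparse arguments": a module attends over stored
# layer outputs but keeps only the top-k highest-scoring ones as its inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseArgumentAttention(nn.Module):
    """Attend over candidate layer outputs, keeping only a sparse top-k subset."""

    def __init__(self, dim: int, k: int = 2):
        super().__init__()
        self.k = k
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        # h:          (batch, dim)     current module's state
        # candidates: (batch, n, dim)  stored outputs of other layers
        q = self.query(h).unsqueeze(1)                   # (batch, 1, dim)
        k = self.key(candidates)                         # (batch, n, dim)
        v = self.value(candidates)                       # (batch, n, dim)
        scores = (q * k).sum(-1) / k.shape[-1] ** 0.5    # (batch, n)

        # Sparsity: mask out all but the k highest-scoring candidates,
        # so the module only "reads" a small number of arguments.
        topk = scores.topk(self.k, dim=-1).indices
        mask = torch.full_like(scores, float("-inf"))
        mask.scatter_(1, topk, 0.0)
        attn = F.softmax(scores + mask, dim=-1)          # (batch, n)

        read = (attn.unsqueeze(-1) * v).sum(1)           # (batch, dim)
        return h + read                                  # residual update of the state

if __name__ == "__main__":
    batch, n_layers, dim = 4, 6, 32
    module = SparseArgumentAttention(dim, k=2)
    h = torch.randn(batch, dim)
    past_outputs = torch.randn(batch, n_layers, dim)     # candidate layer outputs
    print(module(h, past_outputs).shape)                 # torch.Size([4, 32])

In this toy version, top-down and bottom-up feedback would simply correspond to including the stored outputs of both later and earlier layers among the candidates the module attends over.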

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-lamb21a,
  title     = {Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers},
  author    = {Lamb, Alex and Goyal, Anirudh and S\l{}owik, Agnieszka and Mozer, Michael and Beaudoin, Philippe and Bengio, Yoshua},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {919--927},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/lamb21a/lamb21a.pdf},
  url       = {https://proceedings.mlr.press/v130/lamb21a.html}
}
Endnote
%0 Conference Paper
%T Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers
%A Alex Lamb
%A Anirudh Goyal
%A Agnieszka Słowik
%A Michael Mozer
%A Philippe Beaudoin
%A Yoshua Bengio
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-lamb21a
%I PMLR
%P 919--927
%U https://proceedings.mlr.press/v130/lamb21a.html
%V 130
APA
Lamb, A., Goyal, A., Słowik, A., Mozer, M., Beaudoin, P. & Bengio, Y. (2021). Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:919-927. Available from https://proceedings.mlr.press/v130/lamb21a.html.
