Kernelized Synaptic Weight Matrices

Lorenz Muller, Julien Martel, Giacomo Indiveri
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3654-3663, 2018.

Abstract

In this paper we introduce a novel neural network architecture, in which weight matrices are re-parametrized in terms of low-dimensional vectors, interacting through kernel functions. A layer of our network can be interpreted as introducing a (potentially infinitely wide) linear layer between input and output. We describe the theory underpinning this model and validate it with concrete examples, exploring how it can be used to impose structure on neural networks in diverse applications ranging from data visualization to recommender systems. We achieve state-of-the-art performance in a collaborative filtering task (MovieLens).
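The construction described in the abstract admits a compact illustration: rather than storing a full n_in × n_out weight matrix, each input unit i and each output unit j carries a low-dimensional vector, and the weight connecting them is a kernel evaluation on that pair, W[i, j] = k(u_i, v_j). The NumPy sketch below assumes an RBF kernel and random initialization; the names (KernelizedLinear, rbf_kernel) and all parameters are hypothetical illustrations of the idea, not the authors' implementation.

# Hypothetical sketch of a kernelized weight matrix (not the paper's code).
import numpy as np

def rbf_kernel(U, V, gamma=1.0):
    # Pairwise RBF kernel between the rows of U (n x d) and V (m x d).
    sq_dists = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

class KernelizedLinear:
    # Linear layer whose weight matrix is never stored directly:
    # W[i, j] = k(u_i, v_j), with one d-dimensional vector per input
    # unit (rows of U) and one per output unit (rows of V).
    def __init__(self, n_in, n_out, d=2, seed=0):
        rng = np.random.default_rng(seed)
        self.U = rng.normal(size=(n_in, d))
        self.V = rng.normal(size=(n_out, d))

    def forward(self, x):
        W = rbf_kernel(self.U, self.V)  # (n_in, n_out), built on the fly
        return x @ W

layer = KernelizedLinear(n_in=784, n_out=128)
y = layer.forward(np.random.randn(32, 784))
print(y.shape)  # (32, 128)

Under this re-parametrization the parameter count drops from n_in * n_out to d * (n_in + n_out), and the geometry of the vectors u_i and v_j is what allows structure to be imposed on the weights, as the abstract suggests.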

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-muller18a,
  title     = {Kernelized Synaptic Weight Matrices},
  author    = {Muller, Lorenz and Martel, Julien and Indiveri, Giacomo},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3654--3663},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/muller18a/muller18a.pdf},
  url       = {https://proceedings.mlr.press/v80/muller18a.html},
  abstract  = {In this paper we introduce a novel neural network architecture, in which weight matrices are re-parametrized in terms of low-dimensional vectors, interacting through kernel functions. A layer of our network can be interpreted as introducing a (potentially infinitely wide) linear layer between input and output. We describe the theory underpinning this model and validate it with concrete examples, exploring how it can be used to impose structure on neural networks in diverse applications ranging from data visualization to recommender systems. We achieve state-of-the-art performance in a collaborative filtering task (MovieLens).}
}
EndNote
%0 Conference Paper
%T Kernelized Synaptic Weight Matrices
%A Lorenz Muller
%A Julien Martel
%A Giacomo Indiveri
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-muller18a
%I PMLR
%P 3654--3663
%U https://proceedings.mlr.press/v80/muller18a.html
%V 80
%X In this paper we introduce a novel neural network architecture, in which weight matrices are re-parametrized in terms of low-dimensional vectors, interacting through kernel functions. A layer of our network can be interpreted as introducing a (potentially infinitely wide) linear layer between input and output. We describe the theory underpinning this model and validate it with concrete examples, exploring how it can be used to impose structure on neural networks in diverse applications ranging from data visualization to recommender systems. We achieve state-of-the-art performance in a collaborative filtering task (MovieLens).
APA
Muller, L., Martel, J. & Indiveri, G. (2018). Kernelized Synaptic Weight Matrices. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3654-3663. Available from https://proceedings.mlr.press/v80/muller18a.html.
