Meta-Learning Neural Bloom Filters

Jack Rae, Sergey Bartunov, Timothy Lillicrap
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5271-5280, 2019.

Abstract

There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence. In applications where inputs arrive at high throughput, or are ephemeral, training a network from scratch is not practical. This motivates the need for few-shot neural data structures. In this paper we explore the learning of approximate set membership over a set of data in one-shot via meta-learning. We propose a novel memory architecture, the Neural Bloom Filter, which is able to achieve significant compression gains over classical Bloom Filters and existing memory-augmented neural networks.
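
For context on the classical baseline the paper compresses against, below is a minimal sketch of a standard Bloom filter for approximate set membership. This is the hand-crafted data structure referenced in the abstract, not the proposed Neural Bloom Filter; the bit-array size, hash count, and hashing scheme here are illustrative assumptions rather than anything specified by the paper.

# Minimal classical Bloom filter: approximate set membership with
# no false negatives and a tunable false-positive rate.
# Parameters below are arbitrary illustrative choices.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)  # one byte per bit, for simplicity

    def _indexes(self, item):
        # Derive k indexes from k salted SHA-256 digests of the item.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i] = 1

    def __contains__(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[i] for i in self._indexes(item))

bf = BloomFilter()
bf.add("stored-key")
assert "stored-key" in bf      # always True for inserted items
print("unseen-key" in bf)      # usually False; occasionally a false positive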

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-rae19a,
  title     = {Meta-Learning Neural Bloom Filters},
  author    = {Rae, Jack and Bartunov, Sergey and Lillicrap, Timothy},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5271--5280},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/rae19a/rae19a.pdf},
  url       = {https://proceedings.mlr.press/v97/rae19a.html},
  abstract  = {There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence. In applications where inputs arrive at high throughput, or are ephemeral, training a network from scratch is not practical. This motivates the need for few-shot neural data structures. In this paper we explore the learning of approximate set membership over a set of data in one-shot via meta-learning. We propose a novel memory architecture, the Neural Bloom Filter, which is able to achieve significant compression gains over classical Bloom Filters and existing memory-augmented neural networks.}
}
Endnote
%0 Conference Paper
%T Meta-Learning Neural Bloom Filters
%A Jack Rae
%A Sergey Bartunov
%A Timothy Lillicrap
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-rae19a
%I PMLR
%P 5271--5280
%U https://proceedings.mlr.press/v97/rae19a.html
%V 97
%X There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence. In applications where inputs arrive at high throughput, or are ephemeral, training a network from scratch is not practical. This motivates the need for few-shot neural data structures. In this paper we explore the learning of approximate set membership over a set of data in one-shot via meta-learning. We propose a novel memory architecture, the Neural Bloom Filter, which is able to achieve significant compression gains over classical Bloom Filters and existing memory-augmented neural networks.
APA
Rae, J., Bartunov, S. & Lillicrap, T. (2019). Meta-Learning Neural Bloom Filters. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5271-5280. Available from https://proceedings.mlr.press/v97/rae19a.html.