Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5271-5280, 2019.
Abstract
There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence. In applications where inputs arrive at high throughput, or are ephemeral, training a network from scratch is not practical. This motivates the need for few-shot neural data structures. In this paper we explore the learning of approximate set membership over a set of data in one-shot via meta-learning. We propose a novel memory architecture, the Neural Bloom Filter, which is able to achieve significant compression gains over classical Bloom Filters and existing memory-augmented neural networks.
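For context on the baseline the paper compares against, below is a minimal sketch (not from the paper) of a classical Bloom filter for approximate set membership: membership queries may return false positives but never false negatives. The class name and the parameters num_bits and num_hashes are illustrative choices, not values used in the paper.

import hashlib

class BloomFilter:
    """Classical Bloom filter: approximate set membership, no false negatives."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _indices(self, item):
        # Derive k indices by salting a single hash with the hash-function index.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        # Set the k bits addressed by the item's hashes.
        for i in self._indices(item):
            self.bits[i] = True

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[i] for i in self._indices(item))

# Usage: store a small set in one pass, then query.
bf = BloomFilter()
for word in ["alpha", "beta", "gamma"]:
    bf.add(word)
assert bf.might_contain("alpha")   # stored item is always reported present
print(bf.might_contain("delta"))   # usually False; may be a false positive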
@InProceedings{pmlr-v97-rae19a,
title = {Meta-Learning Neural Bloom Filters},
author = {Rae, Jack and Bartunov, Sergey and Lillicrap, Timothy},
booktitle = {Proceedings of the 36th International Conference on Machine Learning},
pages = {5271--5280},
year = {2019},
editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
volume = {97},
series = {Proceedings of Machine Learning Research},
address = {Long Beach, California, USA},
month = {09--15 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v97/rae19a/rae19a.pdf},
url = {http://proceedings.mlr.press/v97/rae19a.html},
abstract = {There has been a recent trend in training neural networks to replace data structures that have been crafted by hand, with an aim for faster execution, better accuracy, or greater compression. In this setting, a neural data structure is instantiated by training a network over many epochs of its inputs until convergence. In applications where inputs arrive at high throughput, or are ephemeral, training a network from scratch is not practical. This motivates the need for few-shot neural data structures. In this paper we explore the learning of approximate set membership over a set of data in one-shot via meta-learning. We propose a novel memory architecture, the Neural Bloom Filter, which is able to achieve significant compression gains over classical Bloom Filters and existing memory-augmented neural networks.}
}
Rae, J., Bartunov, S. & Lillicrap, T. (2019). Meta-Learning Neural Bloom Filters. Proceedings of the 36th International Conference on Machine Learning, in PMLR 97:5271-5280.