Differentiable Programs with Neural Libraries

Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlow
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1213-1222, 2017.

Abstract

We develop a framework for combining differentiable programming languages with neural networks. Using this framework we create end-to-end trainable systems that learn to write interpretable algorithms with perceptual components. We explore the benefits of inductive biases for strong generalization and modularity that come from the program-like structure of our models. In particular, modularity allows us to learn a library of (neural) functions which grows and improves as more tasks are solved. Empirically, we show that this leads to lifelong learning systems that transfer knowledge to new tasks more effectively than baselines.

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-gaunt17a,
  title     = {Differentiable Programs with Neural Libraries},
  author    = {Alexander L. Gaunt and Marc Brockschmidt and Nate Kushman and Daniel Tarlow},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1213--1222},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/gaunt17a/gaunt17a.pdf},
  url       = {https://proceedings.mlr.press/v70/gaunt17a.html},
  abstract  = {We develop a framework for combining differentiable programming languages with neural networks. Using this framework we create end-to-end trainable systems that learn to write interpretable algorithms with perceptual components. We explore the benefits of inductive biases for strong generalization and modularity that come from the program-like structure of our models. In particular, modularity allows us to learn a library of (neural) functions which grows and improves as more tasks are solved. Empirically, we show that this leads to lifelong learning systems that transfer knowledge to new tasks more effectively than baselines.}
}
Endnote
%0 Conference Paper
%T Differentiable Programs with Neural Libraries
%A Alexander L. Gaunt
%A Marc Brockschmidt
%A Nate Kushman
%A Daniel Tarlow
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-gaunt17a
%I PMLR
%P 1213--1222
%U https://proceedings.mlr.press/v70/gaunt17a.html
%V 70
%X We develop a framework for combining differentiable programming languages with neural networks. Using this framework we create end-to-end trainable systems that learn to write interpretable algorithms with perceptual components. We explore the benefits of inductive biases for strong generalization and modularity that come from the program-like structure of our models. In particular, modularity allows us to learn a library of (neural) functions which grows and improves as more tasks are solved. Empirically, we show that this leads to lifelong learning systems that transfer knowledge to new tasks more effectively than baselines.
APA
Gaunt, A.L., Brockschmidt, M., Kushman, N. & Tarlow, D. (2017). Differentiable Programs with Neural Libraries. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1213-1222. Available from https://proceedings.mlr.press/v70/gaunt17a.html.