Hybrid Models with Deep and Invertible Features

Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4723-4732, 2019.

Abstract

We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). An attractive property of our model is that both p(features), the density of the features, and p(targets|features), the predictive distribution, can be computed exactly in a single feed-forward pass. We show that our hybrid model, despite the invertibility constraints, achieves similar accuracy to purely predictive models. Yet the generative component remains a good model of the input features despite the hybrid optimization objective. This offers additional capabilities such as detection of out-of-distribution inputs and enabling semi-supervised learning. The availability of the exact joint density p(targets, features) also allows us to compute many quantities readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning.
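The core idea above can be sketched in a few lines. The following is a minimal illustrative example (not the authors' implementation, and all parameter names are hypothetical): an elementwise affine map stands in for the deep invertible transformation, the change-of-variables formula gives the exact log p(x), a logistic-regression head on the features z gives p(y|x), and their sum is the exact joint log p(y, x) from a single forward pass.

```python
import numpy as np

def forward(x, a, b):
    """Invertible elementwise affine map z = a * x + b (a toy stand-in for a flow)."""
    return a * x + b

def log_prob_x(x, a, b):
    """Exact log p(x) via change of variables with a standard-normal base density."""
    z = forward(x, a, b)
    log_base = -0.5 * np.sum(z ** 2, axis=-1) - 0.5 * z.shape[-1] * np.log(2 * np.pi)
    log_det = np.sum(np.log(np.abs(a)))  # log |det Jacobian| of the elementwise affine map
    return log_base + log_det

def log_prob_y_given_x(y, x, a, b, w, c):
    """Predictive log p(y|x): a linear (logistic) model on the flow features z."""
    z = forward(x, a, b)
    logit = z @ w + c
    p1 = 1.0 / (1.0 + np.exp(-logit))
    return np.where(y == 1, np.log(p1), np.log(1.0 - p1))

def log_joint(y, x, a, b, w, c):
    """Exact joint log p(y, x) = log p(x) + log p(y|x), computed in one pass."""
    return log_prob_x(x, a, b) + log_prob_y_given_x(y, x, a, b, w, c)
```

In the paper the invertible transformation is a deep normalizing flow rather than a single affine layer, but the bookkeeping is the same: the log-determinant terms accumulate across layers, and both density and prediction reuse the same features z.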

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-nalisnick19b,
  title     = {Hybrid Models with Deep and Invertible Features},
  author    = {Nalisnick, Eric and Matsukawa, Akihiro and Teh, Yee Whye and Gorur, Dilan and Lakshminarayanan, Balaji},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4723--4732},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/nalisnick19b/nalisnick19b.pdf},
  url       = {https://proceedings.mlr.press/v97/nalisnick19b.html}
}
Endnote
%0 Conference Paper
%T Hybrid Models with Deep and Invertible Features
%A Eric Nalisnick
%A Akihiro Matsukawa
%A Yee Whye Teh
%A Dilan Gorur
%A Balaji Lakshminarayanan
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-nalisnick19b
%I PMLR
%P 4723--4732
%U https://proceedings.mlr.press/v97/nalisnick19b.html
%V 97
APA
Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., & Lakshminarayanan, B. (2019). Hybrid Models with Deep and Invertible Features. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4723-4732. Available from https://proceedings.mlr.press/v97/nalisnick19b.html.