On the challenges of learning with inference networks on sparse, high-dimensional data


Rahul Krishnan, Dawen Liang, Matthew Hoffman.
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:143-151, 2018.

Abstract

We study parameter estimation in Nonlinear Factor Analysis (NFA), where the generative model is parameterized by a deep neural network. Recent work has focused on learning such models using inference (or recognition) networks; we identify a crucial problem that arises when modeling large, sparse, high-dimensional datasets: underfitting. We study the extent of this underfitting, highlighting that its severity increases with the sparsity of the data. We propose methods to tackle it via iterative optimization inspired by stochastic variational inference (Hoffman et al., 2013) and via improvements to the data representation used for inference. The proposed techniques drastically improve the ability of these powerful models to fit sparse data, achieving state-of-the-art results on a benchmark text-count dataset and excellent results on the task of top-N recommendation.
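The core recipe the abstract alludes to — take the inference network's output as an initialization and then refine the local variational parameters by iteratively ascending the ELBO for each data point — can be sketched on a toy model. The sketch below is an illustration under stated assumptions, not the paper's implementation: it uses a scalar linear-Gaussian model (prior p(z) = N(0, 1), likelihood p(x|z) = N(wz, 1)) so the ELBO and its gradients are available in closed form, and the `encoder` function is a hypothetical stand-in for a trained inference network.

```python
import numpy as np

# Toy scalar model: p(z) = N(0, 1), p(x | z) = N(w*z, 1).
# q(z) = N(m, s2). We initialize (m, s2) with a stand-in "encoder"
# and then refine them by gradient ascent on the closed-form ELBO,
# mirroring the iterative-optimization idea in the abstract.

w = 2.0   # generative weight (fixed for this illustration)
x = 3.0   # one observed data point

def elbo(m, s2):
    # E_q[log p(x|z)] + E_q[log p(z)] - E_q[log q(z)], constants dropped
    return (-0.5 * ((x - w * m) ** 2 + w ** 2 * s2)
            - 0.5 * (m ** 2 + s2)
            + 0.5 * np.log(s2))

def encoder(x):
    # Hypothetical stand-in for an inference network: a crude linear guess.
    return 0.1 * x, 1.0

m, s2 = encoder(x)
elbo_init = elbo(m, s2)

lr = 0.05
for _ in range(200):
    # Analytic gradients of the closed-form ELBO above
    dm = w * (x - w * m) - m
    ds2 = -0.5 * w ** 2 - 0.5 + 0.5 / s2
    m += lr * dm
    s2 = max(s2 + lr * ds2, 1e-6)  # keep the variance positive

elbo_final = elbo(m, s2)
print(elbo_init, elbo_final)  # refinement increases the ELBO
```

For this conjugate toy model the refined parameters converge to the exact posterior (mean wx/(w² + 1) = 1.2, variance 1/(w² + 1) = 0.2), so the gap between the encoder's initial guess and the optimum is directly visible; in the sparse, high-dimensional NFA setting the same refinement is what counters the underfitting the abstract describes.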
