Amortized Variational Inference with Graph Convolutional Networks for Gaussian Processes
Proceedings of Machine Learning Research, PMLR 89:2291-2300, 2019.
Abstract
GP inference on large datasets is computationally expensive, especially when the observation likelihood is non-Gaussian. To reduce the computation, many recent variational inference methods define the variational distribution based on a small number of inducing points. These methods face a hard trade-off between distribution flexibility and computational efficiency. In this paper, we focus on the approximation of the GP posterior at a local level: we define a reusable template to approximate the posterior at neighborhoods while maintaining a global approximation. We first construct a variational distribution such that the inference for a data point considers only its neighborhood, thereby separating the calculation for each data point. We then train Graph Convolutional Networks as a reusable model to run inference for each data point. Compared to previous methods, our method greatly reduces both the number of parameters and the number of optimization iterations. In empirical evaluations, the proposed method significantly speeds up the inference and often obtains more accurate results than competing methods.
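To make the amortization idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual architecture): a single shared map takes a data point's neighborhood features to its local variational parameters, so the parameter count depends on the feature dimension rather than on the dataset size. The neighborhood construction, the one-layer aggregation, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: N data points in d dimensions, each with a k-nearest neighborhood.
N, k, d = 200, 5, 3
X = rng.normal(size=(N, d))

# Brute-force k-NN adjacency (illustrative; a real system would use a fast index).
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
nbrs = np.argsort(dists, axis=1)[:, 1:k + 1]  # skip self at position 0

# One shared "graph convolution"-style layer: aggregate neighbor features,
# then apply a shared linear map to produce local variational parameters.
W = rng.normal(scale=0.1, size=(d, 2))  # shared across all data points

def amortized_params(i):
    """Return (mean, log-variance) of the local variational factor at point i."""
    h = X[nbrs[i]].mean(axis=0)  # neighborhood aggregation
    out = h @ W                  # shared (amortized) linear map
    return out[0], out[1]

mu, logvar = amortized_params(0)

# The shared template has d * 2 parameters, versus 2 * N free parameters
# in a non-amortized mean-field approximation over all N points.
print(W.size, 2 * N)
```

The point of the sketch is the parameter accounting in the final comment: inference for each point reuses the same small template `W`, which is what allows the number of parameters (and optimization iterations) to stay small as the dataset grows.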