SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes

Sanyam Kapoor, Marc Finzi, Ke Alexander Wang, Andrew Gordon Wilson
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:5279-5289, 2021.

Abstract

State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix-vector multiplies (MVMs) with the covariance kernel. The Structured Kernel Interpolation (SKI) framework accelerates these MVMs by performing efficient MVMs on a grid and interpolating back to the original space. In this work, we develop a connection between SKI and the permutohedral lattice used for high-dimensional fast bilateral filtering. Using a sparse simplicial grid instead of a dense rectangular one, we can perform GP inference exponentially faster in the dimension than SKI. Our approach, Simplex-GP, enables scaling SKI to high dimensions while maintaining strong predictive performance. We additionally provide a CUDA implementation of Simplex-GP, which enables significant GPU acceleration of MVM-based inference.
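To make the SKI structure the abstract refers to concrete, here is a minimal 1D NumPy sketch (illustrative only, not the paper's permutohedral-lattice method): SKI approximates the kernel matrix as K(X, X) ≈ W K_UU Wᵀ, where U is a regular grid and W holds sparse interpolation weights, so an MVM becomes two sparse multiplies plus one structured grid MVM. All variable names here are our own for illustration.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel between two 1D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200)     # training inputs
u = np.linspace(0.0, 1.0, 101)          # regular inducing grid U
h = u[1] - u[0]

# Sparse linear-interpolation weights: each x lies between two grid points,
# so each row of W has exactly two nonzeros (dense here for simplicity).
idx = np.clip(((x - u[0]) / h).astype(int), 0, len(u) - 2)
frac = (x - u[idx]) / h
W = np.zeros((len(x), len(u)))
W[np.arange(len(x)), idx] = 1.0 - frac
W[np.arange(len(x)), idx + 1] = frac

# On a regular grid, K_UU is Toeplitz; SKI exploits that structure for fast
# grid MVMs, while this sketch just uses a plain dense matmul.
K_uu = rbf(u, u)
v = rng.normal(size=len(x))

mvm_ski = W @ (K_uu @ (W.T @ v))        # approximate MVM via the grid
mvm_exact = rbf(x, x) @ v               # O(n^2) reference MVM

print(np.max(np.abs(mvm_ski - mvm_exact)))  # small interpolation error
```

Simplex-GP replaces the dense rectangular grid U (whose size grows exponentially with dimension) with a sparse simplicial permutohedral lattice, which is what makes the same MVM decomposition tractable in high dimensions.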

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-kapoor21a,
  title     = {SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes},
  author    = {Kapoor, Sanyam and Finzi, Marc and Wang, Ke Alexander and Wilson, Andrew Gordon},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {5279--5289},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/kapoor21a/kapoor21a.pdf},
  url       = {https://proceedings.mlr.press/v139/kapoor21a.html},
  abstract  = {State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix-vector multiplies (MVMs) with the covariance kernel. The Structured Kernel Interpolation (SKI) framework accelerates these MVMs by performing efficient MVMs on a grid and interpolating back to the original space. In this work, we develop a connection between SKI and the permutohedral lattice used for high-dimensional fast bilateral filtering. Using a sparse simplicial grid instead of a dense rectangular one, we can perform GP inference exponentially faster in the dimension than SKI. Our approach, Simplex-GP, enables scaling SKI to high dimensions while maintaining strong predictive performance. We additionally provide a CUDA implementation of Simplex-GP, which enables significant GPU acceleration of MVM-based inference.}
}
Endnote
%0 Conference Paper
%T SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes
%A Sanyam Kapoor
%A Marc Finzi
%A Ke Alexander Wang
%A Andrew Gordon Wilson
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-kapoor21a
%I PMLR
%P 5279--5289
%U https://proceedings.mlr.press/v139/kapoor21a.html
%V 139
%X State-of-the-art methods for scalable Gaussian processes use iterative algorithms, requiring fast matrix-vector multiplies (MVMs) with the covariance kernel. The Structured Kernel Interpolation (SKI) framework accelerates these MVMs by performing efficient MVMs on a grid and interpolating back to the original space. In this work, we develop a connection between SKI and the permutohedral lattice used for high-dimensional fast bilateral filtering. Using a sparse simplicial grid instead of a dense rectangular one, we can perform GP inference exponentially faster in the dimension than SKI. Our approach, Simplex-GP, enables scaling SKI to high dimensions while maintaining strong predictive performance. We additionally provide a CUDA implementation of Simplex-GP, which enables significant GPU acceleration of MVM-based inference.
APA
Kapoor, S., Finzi, M., Wang, K. A. & Wilson, A. G. (2021). SKIing on Simplices: Kernel Interpolation on the Permutohedral Lattice for Scalable Gaussian Processes. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:5279-5289. Available from https://proceedings.mlr.press/v139/kapoor21a.html.

Related Material