Hierarchically-partitioned Gaussian Process Approximation

Byung-Jun Lee, Jongmin Lee, Kee-Eung Kim
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:822-831, 2017.

Abstract

The Gaussian process (GP) is a simple yet powerful probabilistic framework for various machine learning tasks. However, exact algorithms for learning and prediction are prohibitively expensive on large datasets due to their inherent computational complexity. To overcome this limitation, various techniques have been proposed, in particular local GP algorithms that scale “truly linearly” with the dataset size. In this paper, we introduce a hierarchical model based on local GPs for large-scale datasets, which stacks inducing points over inducing points in layers. By using a different kernel in each layer, the overall model becomes multi-scale and is able to capture both long- and short-range dependencies. We demonstrate the effectiveness of our model through its speed-accuracy trade-off on challenging real-world datasets.
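
The abstract gives only a high-level picture of the model. Purely as an illustration of the multi-scale idea, and not as a reconstruction of the authors' actual algorithm, the numpy sketch below fits two subset-of-regressors layers: a coarse set of inducing points with a long-lengthscale RBF kernel, then a denser set with a short lengthscale fitted to the residual. All kernel choices, lengthscales, inducing-point placements, and the toy data are hypothetical.

import numpy as np

def rbf(X, Z, lengthscale, variance=1.0):
    """Squared-exponential kernel matrix between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sor_fit(X, y, Z, ls, noise):
    """Subset-of-regressors fit: weights w so that K(Xs, Z) @ w predicts y."""
    Kzz = rbf(Z, Z, ls) + 1e-8 * np.eye(len(Z))  # jitter for stability
    Kxz = rbf(X, Z, ls)
    A = noise * Kzz + Kxz.T @ Kxz                # (sigma^2 Kzz + Kzx Kxz)
    return np.linalg.solve(A, Kxz.T @ y)

# Toy 1-D data with both long- and short-range structure (hypothetical).
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 10.0, 500))[:, None]
y = np.sin(0.5 * X[:, 0]) + 0.3 * np.sin(8.0 * X[:, 0]) \
    + 0.1 * rng.standard_normal(500)

Z1 = np.linspace(0.0, 10.0, 8)[:, None]    # coarse layer: few inducing points
Z2 = np.linspace(0.0, 10.0, 40)[:, None]   # fine layer: denser inducing points
noise = 0.1 ** 2

# Fit the long-range layer first, then the short-range layer on its residual.
w1 = sor_fit(X, y, Z1, ls=3.0, noise=noise)
resid = y - rbf(X, Z1, 3.0) @ w1
w2 = sor_fit(X, resid, Z2, ls=0.3, noise=noise)

def predict(Xs):
    """Additive two-scale predictive mean."""
    return rbf(Xs, Z1, 3.0) @ w1 + rbf(Xs, Z2, 0.3) @ w2

Xs = np.linspace(0.0, 10.0, 200)[:, None]
print(predict(Xs)[:5])

The two lengthscales loosely mirror the per-layer kernels described in the abstract: the coarse layer captures long-range trends, while the residual layer picks up the short-range variation that remains.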

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-lee17a,
  title     = {{Hierarchically-partitioned Gaussian Process Approximation}},
  author    = {Lee, Byung-Jun and Lee, Jongmin and Kim, Kee-Eung},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {822--831},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/lee17a/lee17a.pdf},
  url       = {https://proceedings.mlr.press/v54/lee17a.html}
}
Endnote
%0 Conference Paper
%T Hierarchically-partitioned Gaussian Process Approximation
%A Byung-Jun Lee
%A Jongmin Lee
%A Kee-Eung Kim
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-lee17a
%I PMLR
%P 822--831
%U https://proceedings.mlr.press/v54/lee17a.html
%V 54
APA
Lee, B., Lee, J., & Kim, K. (2017). Hierarchically-partitioned Gaussian Process Approximation. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:822-831. Available from https://proceedings.mlr.press/v54/lee17a.html.