Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning

Sattar Vakili, Jonathan Scarlett, Da-Shan Shiu, Alberto Bernacchia
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:21960-21983, 2022.

Abstract

Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization. It is well known that a major downside for kernel-based models is the high computational cost; given a dataset of $n$ samples, the cost grows as $\mathcal{O}(n^3)$. Existing sparse approximation methods can yield a significant reduction in the computational cost, effectively reducing the actual cost down to as low as $\mathcal{O}(n)$ in certain cases. Despite this remarkable empirical success, significant gaps remain in the existing results for the analytical bounds on the error due to approximation. In this work, we provide novel confidence intervals for the Nyström method and the sparse variational Gaussian process approximation method, which we establish using novel interpretations of the approximate (surrogate) posterior variance of the models. Our confidence intervals lead to improved performance bounds in both regression and optimization problems.
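As an illustration of the cost reduction discussed in the abstract (not code from the paper), the sketch below shows a Nyström-style low-rank approximation for kernel ridge regression: with $m \ll n$ inducing points the dominant solve costs roughly $\mathcal{O}(nm^2)$ rather than $\mathcal{O}(n^3)$. All function names and parameter choices here are hypothetical, and the paper's confidence-interval constructions are not reproduced.

# Minimal sketch, assuming an RBF kernel and randomly chosen inducing points.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Squared-exponential kernel matrix between two sets of inputs.
    sq_dists = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def nystrom_krr_predict(X, y, X_test, m=50, lam=1e-2, jitter=1e-8):
    # Kernel ridge regression under the Nystrom approximation
    # K ~= K_nm K_mm^{-1} K_mn with m inducing (landmark) points.
    # The regularized solve uses only an m x m system, so the dominant
    # cost is O(n m^2) instead of the exact O(n^3).
    n = X.shape[0]
    idx = np.random.choice(n, size=min(m, n), replace=False)
    Z = X[idx]

    K_mm = rbf_kernel(Z, Z) + jitter * np.eye(len(Z))
    K_nm = rbf_kernel(X, Z)

    # Subset-of-regressors form of the predictor:
    # f(x*) = k_{*m} (lam * K_mm + K_mn K_nm)^{-1} K_mn y
    A = lam * K_mm + K_nm.T @ K_nm
    alpha = np.linalg.solve(A, K_nm.T @ y)

    K_tm = rbf_kernel(X_test, Z)
    return K_tm @ alpha

# Toy usage: noisy sine regression with n = 2000 samples and m = 50 landmarks.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
X_test = np.linspace(-3, 3, 5)[:, None]
print(nystrom_krr_predict(X, y, X_test))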

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-vakili22a,
  title     = {Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning},
  author    = {Vakili, Sattar and Scarlett, Jonathan and Shiu, Da-Shan and Bernacchia, Alberto},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {21960--21983},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/vakili22a/vakili22a.pdf},
  url       = {https://proceedings.mlr.press/v162/vakili22a.html},
  abstract  = {Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization. It is well known that a major downside for kernel-based models is the high computational cost; given a dataset of $n$ samples, the cost grows as $\mathcal{O}(n^3)$. Existing sparse approximation methods can yield a significant reduction in the computational cost, effectively reducing the actual cost down to as low as $\mathcal{O}(n)$ in certain cases. Despite this remarkable empirical success, significant gaps remain in the existing results for the analytical bounds on the error due to approximation. In this work, we provide novel confidence intervals for the Nyström method and the sparse variational Gaussian process approximation method, which we establish using novel interpretations of the approximate (surrogate) posterior variance of the models. Our confidence intervals lead to improved performance bounds in both regression and optimization problems.}
}
Endnote
%0 Conference Paper
%T Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning
%A Sattar Vakili
%A Jonathan Scarlett
%A Da-Shan Shiu
%A Alberto Bernacchia
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-vakili22a
%I PMLR
%P 21960--21983
%U https://proceedings.mlr.press/v162/vakili22a.html
%V 162
%X Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization. It is well known that a major downside for kernel-based models is the high computational cost; given a dataset of $n$ samples, the cost grows as $\mathcal{O}(n^3)$. Existing sparse approximation methods can yield a significant reduction in the computational cost, effectively reducing the actual cost down to as low as $\mathcal{O}(n)$ in certain cases. Despite this remarkable empirical success, significant gaps remain in the existing results for the analytical bounds on the error due to approximation. In this work, we provide novel confidence intervals for the Nyström method and the sparse variational Gaussian process approximation method, which we establish using novel interpretations of the approximate (surrogate) posterior variance of the models. Our confidence intervals lead to improved performance bounds in both regression and optimization problems.
APA
Vakili, S., Scarlett, J., Shiu, D.-S. & Bernacchia, A. (2022). Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:21960-21983. Available from https://proceedings.mlr.press/v162/vakili22a.html.
