Nyström Kernel Mean Embeddings

Antoine Chatalic, Nicolas Schreuder, Lorenzo Rosasco, Alessandro Rudi
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:3006-3024, 2022.

Abstract

Kernel mean embeddings are a powerful tool to represent probability distributions over arbitrary spaces as single points in a Hilbert space. Yet, the cost of computing and storing such embeddings prohibits their direct use in large-scale settings. We propose an efficient approximation procedure based on the Nyström method, which exploits a small random subset of the dataset. Our main result is an upper bound on the approximation error of this procedure. It yields sufficient conditions on the subsample size to obtain the standard 1/√n rate while reducing computational costs. We discuss applications of this result for the approximation of the maximum mean discrepancy and quadrature rules, and we illustrate our theoretical findings with numerical experiments.
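The paper itself presents the analysis rather than code, but the construction described in the abstract is easy to sketch. Below is a minimal NumPy illustration, not the authors' implementation: it assumes a Gaussian kernel and uniform subsampling, and projects the empirical mean embedding (1/n) Σᵢ k(xᵢ, ·) onto the span of m landmark features, with coefficients α = K_mm⁺ (K_mn 1_n / n). All function names here are ours.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def nystrom_kme(X, m, sigma=1.0, seed=0):
    """Nystrom approximation of the empirical kernel mean embedding.

    Draws m landmarks uniformly from the n data points and projects the
    empirical embedding onto the span of their feature maps; the
    coefficients solve alpha = pinv(K_mm) @ (K_mn @ 1_n / n).
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=m, replace=False)  # uniform subsample
    landmarks = X[idx]
    K_mm = gaussian_kernel(landmarks, landmarks, sigma)  # m x m
    K_mn = gaussian_kernel(landmarks, X, sigma)          # m x n
    alpha = np.linalg.pinv(K_mm) @ K_mn.mean(axis=1)     # m coefficients
    return landmarks, alpha

def evaluate_kme(landmarks, alpha, x_new, sigma=1.0):
    """Evaluate the approximate embedding at new points: sum_j alpha_j k(z_j, x)."""
    return gaussian_kernel(x_new, landmarks, sigma) @ alpha

# Usage: embed n = 10000 points while storing only m = 100 landmarks.
X = np.random.default_rng(1).normal(size=(10000, 2))
landmarks, alpha = nystrom_kme(X, m=100)
print(evaluate_kme(landmarks, alpha, X[:5]))
```

Note the cost structure this sketch reflects: the m × n kernel block and the m × m pseudoinverse replace any computation quadratic in n, which is the source of the savings the abstract refers to; the paper's bound governs how large m must be to retain the 1/√n rate.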

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-chatalic22a,
  title     = {{N}ystr{ö}m Kernel Mean Embeddings},
  author    = {Chatalic, Antoine and Schreuder, Nicolas and Rosasco, Lorenzo and Rudi, Alessandro},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {3006--3024},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/chatalic22a/chatalic22a.pdf},
  url       = {https://proceedings.mlr.press/v162/chatalic22a.html},
  abstract  = {Kernel mean embeddings are a powerful tool to represent probability distributions over arbitrary spaces as single points in a Hilbert space. Yet, the cost of computing and storing such embeddings prohibits their direct use in large-scale settings. We propose an efficient approximation procedure based on the Nystr{ö}m method, which exploits a small random subset of the dataset. Our main result is an upper bound on the approximation error of this procedure. It yields sufficient conditions on the subsample size to obtain the standard $1/\sqrt{n}$ rate while reducing computational costs. We discuss applications of this result for the approximation of the maximum mean discrepancy and quadrature rules, and we illustrate our theoretical findings with numerical experiments.}
}
Endnote
%0 Conference Paper
%T Nyström Kernel Mean Embeddings
%A Antoine Chatalic
%A Nicolas Schreuder
%A Lorenzo Rosasco
%A Alessandro Rudi
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-chatalic22a
%I PMLR
%P 3006--3024
%U https://proceedings.mlr.press/v162/chatalic22a.html
%V 162
%X Kernel mean embeddings are a powerful tool to represent probability distributions over arbitrary spaces as single points in a Hilbert space. Yet, the cost of computing and storing such embeddings prohibits their direct use in large-scale settings. We propose an efficient approximation procedure based on the Nyström method, which exploits a small random subset of the dataset. Our main result is an upper bound on the approximation error of this procedure. It yields sufficient conditions on the subsample size to obtain the standard 1/√n rate while reducing computational costs. We discuss applications of this result for the approximation of the maximum mean discrepancy and quadrature rules, and we illustrate our theoretical findings with numerical experiments.
APA
Chatalic, A., Schreuder, N., Rosasco, L. & Rudi, A. (2022). Nyström Kernel Mean Embeddings. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:3006-3024. Available from https://proceedings.mlr.press/v162/chatalic22a.html.