Mean Nyström Embeddings for Adaptive Compressive Learning

Antoine Chatalic, Luigi Carratino, Ernesto De Vito, Lorenzo Rosasco
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:9869-9889, 2022.

Abstract

Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments. The learning task is then approximately solved as an inverse problem using an adapted parametric model. Previous works in this context have focused on sketches obtained by averaging random features, which, while universal, can be poorly adapted to the problem at hand. In this paper, we propose and study the idea of performing sketching based on data-dependent Nyström approximation. From a theoretical perspective, we prove that the excess risk can be controlled under a geometric assumption relating the parametric model used to learn from the sketch to the covariance operator associated to the task at hand. Empirically, we show for k-means clustering and Gaussian modeling that, for a fixed sketch size, Nyström sketches indeed outperform those built with random features.
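As a rough illustration of the two sketching strategies the abstract contrasts (not the authors' implementation), a sketch of size m can be formed by averaging a feature map over the dataset: either random Fourier features, or Nyström features built from landmark points drawn from the data. The Gaussian kernel, the bandwidth, and uniform landmark subsampling below are assumptions chosen for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 1000, 5, 20            # dataset size, dimension, sketch size
X = rng.normal(size=(n, d))      # toy dataset
sigma = 1.0                      # Gaussian kernel bandwidth (assumed)

# --- Random-feature sketch: average of random Fourier features ---
# phi_rf(x) = sqrt(2/m) * cos(W^T x + b), a data-independent feature map.
W = rng.normal(scale=1.0 / sigma, size=(d, m))
b = rng.uniform(0.0, 2.0 * np.pi, size=m)
sketch_rf = (np.sqrt(2.0 / m) * np.cos(X @ W + b)).mean(axis=0)

# --- Nystrom sketch: average of data-dependent Nystrom features ---
# phi_ny(x) = K_mm^{-1/2} k_m(x), with landmarks subsampled from the data.
def gauss_kernel(A, B):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

landmarks = X[rng.choice(n, size=m, replace=False)]
Kmm = gauss_kernel(landmarks, landmarks)
U, s, _ = np.linalg.svd(Kmm)                      # Kmm is symmetric PSD
Kmm_inv_sqrt = U @ np.diag(1.0 / np.sqrt(np.maximum(s, 1e-12))) @ U.T
sketch_ny = (gauss_kernel(X, landmarks) @ Kmm_inv_sqrt).mean(axis=0)

print(sketch_rf.shape, sketch_ny.shape)           # both m-dimensional
```

Both sketches are m-dimensional mean embeddings of the empirical distribution; the difference is that the Nyström feature map adapts to the data through the sampled landmarks, which is the property the paper exploits.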

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-chatalic22a,
  title     = {Mean Nystr{\"o}m Embeddings for Adaptive Compressive Learning},
  author    = {Chatalic, Antoine and Carratino, Luigi and De Vito, Ernesto and Rosasco, Lorenzo},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {9869--9889},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/chatalic22a/chatalic22a.pdf},
  url       = {https://proceedings.mlr.press/v151/chatalic22a.html},
  abstract  = {Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments. The learning task is then approximately solved as an inverse problem using an adapted parametric model. Previous works in this context have focused on sketches obtained by averaging random features, which, while universal, can be poorly adapted to the problem at hand. In this paper, we propose and study the idea of performing sketching based on data-dependent Nystr{\"o}m approximation. From a theoretical perspective, we prove that the excess risk can be controlled under a geometric assumption relating the parametric model used to learn from the sketch to the covariance operator associated to the task at hand. Empirically, we show for k-means clustering and Gaussian modeling that, for a fixed sketch size, Nystr{\"o}m sketches indeed outperform those built with random features.}
}
Endnote
%0 Conference Paper
%T Mean Nyström Embeddings for Adaptive Compressive Learning
%A Antoine Chatalic
%A Luigi Carratino
%A Ernesto De Vito
%A Lorenzo Rosasco
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-chatalic22a
%I PMLR
%P 9869--9889
%U https://proceedings.mlr.press/v151/chatalic22a.html
%V 151
%X Compressive learning is an approach to efficient large-scale learning based on sketching an entire dataset to a single mean embedding (the sketch), i.e., a vector of generalized moments. The learning task is then approximately solved as an inverse problem using an adapted parametric model. Previous works in this context have focused on sketches obtained by averaging random features, which, while universal, can be poorly adapted to the problem at hand. In this paper, we propose and study the idea of performing sketching based on data-dependent Nyström approximation. From a theoretical perspective, we prove that the excess risk can be controlled under a geometric assumption relating the parametric model used to learn from the sketch to the covariance operator associated to the task at hand. Empirically, we show for k-means clustering and Gaussian modeling that, for a fixed sketch size, Nyström sketches indeed outperform those built with random features.
APA
Chatalic, A., Carratino, L., De Vito, E. & Rosasco, L. (2022). Mean Nyström Embeddings for Adaptive Compressive Learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:9869-9889. Available from https://proceedings.mlr.press/v151/chatalic22a.html.
