Sobolev Norm Learning Rates for Conditional Mean Embeddings

Prem Talwai, Ali Shameli, David Simchi-Levi
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:10422-10447, 2022.

Abstract

We develop novel learning rates for conditional mean embeddings by applying the theory of interpolation for reproducing kernel Hilbert spaces (RKHS). We derive explicit, adaptive convergence rates for the sample estimator under the misspecified setting, where the target operator is not Hilbert-Schmidt or bounded with respect to the input/output RKHSs. We demonstrate that in certain parameter regimes, we can achieve uniform convergence rates in the output RKHS. We hope our analyses will allow the much broader application of conditional mean embeddings to more complex ML/RL settings involving infinite dimensional RKHSs and continuous state spaces.
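For context, the sample estimator studied in this line of work is the standard regularized (kernel ridge regression) form of the conditional mean embedding. The sketch below is a minimal illustration under assumed choices (Gaussian kernels, a plug-in conditional-mean readout, and hypothetical helper names); it is not the paper's code.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def cme_weights(X, x_query, lam=1e-2, sigma=1.0):
    # Weights alpha(x) of the sample conditional mean embedding:
    #   mu_hat(x) = sum_i alpha_i(x) * l(Y_i, .)  in the output RKHS,
    # with alpha(x) = (K + n*lam*I)^{-1} k_x, K the input Gram matrix.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    k_x = gaussian_kernel(X, x_query, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), k_x)

def cme_predict_mean(X, Y, x_query, lam=1e-2, sigma=1.0):
    # Plug-in estimate of E[Y | X = x], obtained by applying the
    # CME weights to the observed outputs Y.
    alpha = cme_weights(X, x_query, lam, sigma)
    return alpha.T @ Y

# Usage: with Y a deterministic function of X, the estimate at an
# interior point should approach the true conditional mean.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))
Y = 2.0 * X
pred = cme_predict_mean(X, Y, np.array([[0.5]]), lam=1e-3, sigma=0.5)
```

The regularization parameter `lam` and the sample size `n` are exactly the quantities whose interplay governs the learning rates analyzed in the paper.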

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-talwai22a,
  title = {Sobolev Norm Learning Rates for Conditional Mean Embeddings},
  author = {Talwai, Prem and Shameli, Ali and Simchi-Levi, David},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages = {10422--10447},
  year = {2022},
  editor = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume = {151},
  series = {Proceedings of Machine Learning Research},
  month = {28--30 Mar},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v151/talwai22a/talwai22a.pdf},
  url = {https://proceedings.mlr.press/v151/talwai22a.html},
  abstract = {We develop novel learning rates for conditional mean embeddings by applying the theory of interpolation for reproducing kernel Hilbert spaces (RKHS). We derive explicit, adaptive convergence rates for the sample estimator under the misspecified setting, where the target operator is not Hilbert-Schmidt or bounded with respect to the input/output RKHSs. We demonstrate that in certain parameter regimes, we can achieve uniform convergence rates in the output RKHS. We hope our analyses will allow the much broader application of conditional mean embeddings to more complex ML/RL settings involving infinite dimensional RKHSs and continuous state spaces.}
}
Endnote
%0 Conference Paper
%T Sobolev Norm Learning Rates for Conditional Mean Embeddings
%A Prem Talwai
%A Ali Shameli
%A David Simchi-Levi
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-talwai22a
%I PMLR
%P 10422--10447
%U https://proceedings.mlr.press/v151/talwai22a.html
%V 151
%X We develop novel learning rates for conditional mean embeddings by applying the theory of interpolation for reproducing kernel Hilbert spaces (RKHS). We derive explicit, adaptive convergence rates for the sample estimator under the misspecified setting, where the target operator is not Hilbert-Schmidt or bounded with respect to the input/output RKHSs. We demonstrate that in certain parameter regimes, we can achieve uniform convergence rates in the output RKHS. We hope our analyses will allow the much broader application of conditional mean embeddings to more complex ML/RL settings involving infinite dimensional RKHSs and continuous state spaces.
APA
Talwai, P., Shameli, A. &amp; Simchi-Levi, D. (2022). Sobolev Norm Learning Rates for Conditional Mean Embeddings. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:10422-10447. Available from https://proceedings.mlr.press/v151/talwai22a.html.