Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression

Nicole Kramer, Masashi Sugiyama, Mikio Braun
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:288-295, 2009.

Abstract

The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.
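The core trick the abstract describes — approximating the trace of powers of the kernel matrix from a Lanczos tridiagonalization — can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: it combines a plain k-step Lanczos recursion with stochastic (Hutchinson-style) probing, so the function names and the probing scheme are assumptions. Each probe costs O(kn^2), avoiding the O(n^3) cost of forming the matrix power explicitly.

```python
import numpy as np

def lanczos(K, v, k):
    """k-step Lanczos tridiagonalization of a symmetric matrix K.

    Returns the diagonal (alpha) and off-diagonal (beta) of the
    tridiagonal matrix T_k; costs O(k n^2) kernel-matrix products.
    """
    n = len(v)
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q = v / np.linalg.norm(v)
    q_prev = np.zeros(n)
    for j in range(k):
        w = K @ q
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j > 0:
            w -= beta[j - 1] * q_prev
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:          # breakdown: Krylov space exhausted
                alpha, beta = alpha[:j + 1], beta[:j]
                break
            q_prev, q = q, w / beta[j]
    return alpha, beta

def trace_power_estimate(K, m, k=20, n_probes=10, rng=None):
    """Estimate trace(K^m) by stochastic Lanczos quadrature (illustrative).

    For each random probe v, the eigenvalues theta_i of T_k and the first
    components of its eigenvectors give a Gaussian-quadrature estimate of
    v^T K^m v; averaging Rademacher probes estimates the trace.
    """
    rng = np.random.default_rng(rng)
    est = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=K.shape[0])  # E[v v^T] = I
        alpha, beta = lanczos(K, v, k)
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        theta, U = np.linalg.eigh(T)
        est += (v @ v) * np.sum(U[0] ** 2 * theta ** m)
    return est / n_probes
```

With k equal to the matrix dimension the quadrature value of each probe is exact, so only the Monte-Carlo averaging contributes error; in the regime the paper targets, k is a small constant, which is what keeps the overall runtime quadratic in the number of examples.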

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-kramer09a,
  title     = {Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression},
  author    = {Kramer, Nicole and Sugiyama, Masashi and Braun, Mikio},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages     = {288--295},
  year      = {2009},
  editor    = {van Dyk, David and Welling, Max},
  volume    = {5},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v5/kramer09a/kramer09a.pdf},
  url       = {https://proceedings.mlr.press/v5/kramer09a.html},
  abstract  = {The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.}
}
Endnote
%0 Conference Paper
%T Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression
%A Nicole Kramer
%A Masashi Sugiyama
%A Mikio Braun
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-kramer09a
%I PMLR
%P 288--295
%U https://proceedings.mlr.press/v5/kramer09a.html
%V 5
%X The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.
RIS
TY - CPAPER
TI - Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression
AU - Nicole Kramer
AU - Masashi Sugiyama
AU - Mikio Braun
BT - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
DA - 2009/04/15
ED - David van Dyk
ED - Max Welling
ID - pmlr-v5-kramer09a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 5
SP - 288
EP - 295
L1 - http://proceedings.mlr.press/v5/kramer09a/kramer09a.pdf
UR - https://proceedings.mlr.press/v5/kramer09a.html
AB - The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.
ER -
APA
Kramer, N., Sugiyama, M. & Braun, M.. (2009). Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:288-295 Available from https://proceedings.mlr.press/v5/kramer09a.html.