Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression

Nicole Kramer, Masashi Sugiyama, Mikio Braun
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:288-295, 2009.

Abstract

The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.
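The key idea in the abstract — run the Lanczos iteration on the kernel matrix and use the resulting Ritz values to approximate traces of matrix powers in quadratic time — can be sketched as follows. This is a generic Lanczos-quadrature illustration under assumed names (`lanczos`, `trace_power_estimate`), not the authors' exact KPLS-specific algorithm; in KPLS the iteration is naturally started with the response vector rather than a random probe.

```python
import numpy as np

def lanczos(K, v0, k):
    """Run k steps of the Lanczos iteration on a symmetric matrix K.

    Each step costs one matrix-vector product, i.e. O(n^2), so k steps
    are quadratic in n for fixed k. Returns the diagonal (alphas) and
    off-diagonal (betas) of the tridiagonal matrix T.
    """
    n = K.shape[0]
    alphas, betas = [], []
    v_prev = np.zeros(n)
    v = v0 / np.linalg.norm(v0)
    beta = 0.0
    for _ in range(k):
        w = K @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:          # invariant subspace found: stop early
            break
        v_prev, v = v, w / beta
    return np.array(alphas), np.array(betas[:-1])

def trace_power_estimate(K, m, k=20, rng=None):
    """Estimate tr(K^m) with one random probe vector (Lanczos quadrature)."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    a, b = lanczos(K, rng.standard_normal(n), k)
    # Eigendecomposition of the small tridiagonal matrix T (at most k x k).
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    theta, U = np.linalg.eigh(T)
    # Ritz values theta approximate the eigenvalues of K; the squared
    # first components of the eigenvectors act as quadrature weights.
    return n * np.sum(U[0, :] ** 2 * theta ** m)
```

With a fixed number of Lanczos steps the whole estimate stays O(n^2), whereas computing tr(K^m) from a full eigendecomposition of K would be cubic.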

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-kramer09a,
  title     = {Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression},
  author    = {Nicole Kramer and Masashi Sugiyama and Mikio Braun},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages     = {288--295},
  year      = {2009},
  editor    = {David van Dyk and Max Welling},
  volume    = {5},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v5/kramer09a/kramer09a.pdf},
  url       = {http://proceedings.mlr.press/v5/kramer09a.html},
  abstract  = {The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.}
}
Endnote
%0 Conference Paper
%T Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression
%A Nicole Kramer
%A Masashi Sugiyama
%A Mikio Braun
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-kramer09a
%I PMLR
%J Proceedings of Machine Learning Research
%P 288--295
%U http://proceedings.mlr.press
%V 5
%W PMLR
%X The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.
RIS
TY - CPAPER
TI - Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression
AU - Nicole Kramer
AU - Masashi Sugiyama
AU - Mikio Braun
BT - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
PY - 2009/04/15
DA - 2009/04/15
ED - David van Dyk
ED - Max Welling
ID - pmlr-v5-kramer09a
PB - PMLR
SP - 288
DP - PMLR
EP - 295
L1 - http://proceedings.mlr.press/v5/kramer09a/kramer09a.pdf
UR - http://proceedings.mlr.press/v5/kramer09a.html
AB - The runtime for Kernel Partial Least Squares (KPLS) to compute the fit is quadratic in the number of examples. However, the necessity of obtaining sensitivity measures as degrees of freedom for model selection or confidence intervals for more detailed analysis requires cubic runtime, and thus constitutes a computational bottleneck in real-world data analysis. We propose a novel algorithm for KPLS which not only computes (a) the fit, but also (b) its approximate degrees of freedom and (c) error bars in quadratic runtime. The algorithm exploits a close connection between Kernel PLS and the Lanczos algorithm for approximating the eigenvalues of symmetric matrices, and uses this approximation to compute the trace of powers of the kernel matrix in quadratic runtime.
ER -
APA
Kramer, N., Sugiyama, M. & Braun, M. (2009). Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in PMLR 5:288-295.