Preconditioning Kernel Matrices

Kurt Cutajar, Michael Osborne, John Cunningham, Maurizio Filippone
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2529-2538, 2016.

Abstract

The computational and storage complexity of kernel machines presents the primary barrier to their scaling to large, modern datasets. A common way to tackle the scalability issue is to use the conjugate gradient algorithm, which relieves the constraints on both storage (the kernel matrix need not be stored) and computation (both stochastic gradients and parallelization can be used). Even so, conjugate gradient is not without its own issues: the conditioning of kernel matrices is often such that conjugate gradient converges poorly in practice. Preconditioning is a common approach to alleviating this issue. Here we propose preconditioned conjugate gradients for kernel machines, and develop a broad range of preconditioners particularly useful for kernel matrices. We describe a scalable approach to both solving kernel machines and learning their hyperparameters. We show this approach is exact in the limit of iterations and outperforms state-of-the-art approximations for a given computational budget.
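To make the idea concrete, below is a minimal sketch (not the authors' code) of a preconditioned conjugate gradient solve of the regularized kernel system (K + σ²I)α = y, using a Nyström-style low-rank preconditioner, one example of the general family of preconditioners the paper discusses. The RBF kernel, the landmark count m, the noise level, and all function names are illustrative assumptions. The preconditioner's inverse is applied through the Woodbury identity, which costs O(nm²) per application rather than O(n³).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg
from scipy.spatial.distance import cdist

def rbf_kernel(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel matrix between rows of X and Z."""
    return variance * np.exp(-0.5 * cdist(X, Z, "sqeuclidean") / lengthscale ** 2)

rng = np.random.default_rng(0)
n, d, m = 2000, 5, 100                  # m = number of Nystrom landmark points (assumed)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
noise = 1e-2                            # sigma^2, the observation-noise variance (assumed)

# For clarity we form K explicitly; at scale, CG only needs matrix-vector
# products with K, so the full matrix never has to be stored.
K = rbf_kernel(X, X)
A = K + noise * np.eye(n)

# Nystrom-style preconditioner P = K_nm K_mm^{-1} K_mn + sigma^2 I.
idx = rng.choice(n, size=m, replace=False)
Knm = rbf_kernel(X, X[idx])
Kmm = rbf_kernel(X[idx], X[idx]) + 1e-8 * np.eye(m)   # jitter for numerical stability
inner = noise * Kmm + Knm.T @ Knm

def apply_Pinv(v):
    # Woodbury identity:
    # P^{-1} v = (v - K_nm (sigma^2 K_mm + K_mn K_nm)^{-1} K_mn v) / sigma^2
    return (v - Knm @ np.linalg.solve(inner, Knm.T @ v)) / noise

M = LinearOperator((n, n), matvec=apply_Pinv)
alpha, info = cg(A, y, M=M, maxiter=1000)
print("converged:", info == 0, "| residual:", np.linalg.norm(A @ alpha - y))
```

Since the preconditioner captures the dominant low-rank structure of K, the preconditioned system is much better conditioned than A itself, and CG typically converges in far fewer iterations than the unpreconditioned solve under these assumptions.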

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-cutajar16,
  title     = {Preconditioning Kernel Matrices},
  author    = {Cutajar, Kurt and Osborne, Michael and Cunningham, John and Filippone, Maurizio},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2529--2538},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/cutajar16.pdf},
  url       = {https://proceedings.mlr.press/v48/cutajar16.html}
}
APA
Cutajar, K., Osborne, M., Cunningham, J. & Filippone, M. (2016). Preconditioning Kernel Matrices. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:2529-2538. Available from https://proceedings.mlr.press/v48/cutajar16.html.
