AxlePro: Momentum-Accelerated Batched Training of Kernel Machines

Yiming Zhang, Parthe Pandit
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:1666-1674, 2025.

Abstract

In this paper we derive a novel iterative algorithm for learning kernel machines. Our algorithm, $\textsf{AxlePro}$, extends the $\textsf{EigenPro}$ family of algorithms via momentum-based acceleration. $\textsf{AxlePro}$ can be applied to train kernel machines with arbitrary positive semidefinite kernels. We provide a convergence guarantee for the algorithm and demonstrate the speed-up of $\textsf{AxlePro}$ over competing algorithms via numerical experiments. Furthermore, we also derive a version of $\textsf{AxlePro}$ to train large kernel models over arbitrarily large datasets.
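To convey the idea of momentum-based acceleration for kernel machines described in the abstract, here is a minimal, hypothetical sketch: gradient iteration with classical heavy-ball momentum applied to a kernel least-squares problem. This is not the AxlePro algorithm itself (which builds on EigenPro preconditioning and batching); all function names, parameter choices, and the demo data below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    """Positive semidefinite Gaussian kernel matrix with entries k(x_i, z_j)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

def heavy_ball_kernel_solve(K, y, n_iters=500):
    """Solve K a = y by gradient descent with heavy-ball momentum.

    Step size and momentum are set to the classical quadratic-optimal
    values derived from the extreme eigenvalues of K.
    """
    eigvals = np.linalg.eigvalsh(K)
    mu, L = eigvals[0], eigvals[-1]
    lr = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
    a = np.zeros_like(y)
    a_prev = np.zeros_like(y)
    for _ in range(n_iters):
        grad = K @ a - y  # gradient of the quadratic 0.5 a'Ka - y'a
        a, a_prev = a - lr * grad + beta * (a - a_prev), a
    return a

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = np.sin(X[:, 0])
# A small ridge keeps the kernel matrix well conditioned for this demo.
K = gaussian_kernel(X, X) + 1e-2 * np.eye(50)
a = heavy_ball_kernel_solve(K, y)
residual = np.linalg.norm(K @ a - y)
```

For ill-conditioned quadratics, heavy-ball momentum improves the linear convergence rate from roughly $1 - \mu/L$ (plain gradient descent) to roughly $1 - \sqrt{\mu/L}$, which is the kind of speed-up that motivates momentum-based acceleration of the EigenPro family.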

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-zhang25c,
  title     = {AxlePro: Momentum-Accelerated Batched Training of Kernel Machines},
  author    = {Zhang, Yiming and Pandit, Parthe},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {1666--1674},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/zhang25c/zhang25c.pdf},
  url       = {https://proceedings.mlr.press/v258/zhang25c.html},
  abstract  = {In this paper we derive a novel iterative algorithm for learning kernel machines. Our algorithm, $\textsf{AxlePro}$, extends the $\textsf{EigenPro}$ family of algorithms via momentum-based acceleration. $\textsf{AxlePro}$ can be applied to train kernel machines with arbitrary positive semidefinite kernels. We provide a convergence guarantee for the algorithm and demonstrate the speed-up of $\textsf{AxlePro}$ over competing algorithms via numerical experiments. Furthermore, we also derive a version of $\textsf{AxlePro}$ to train large kernel models over arbitrarily large datasets.}
}
Endnote
%0 Conference Paper
%T AxlePro: Momentum-Accelerated Batched Training of Kernel Machines
%A Yiming Zhang
%A Parthe Pandit
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-zhang25c
%I PMLR
%P 1666--1674
%U https://proceedings.mlr.press/v258/zhang25c.html
%V 258
%X In this paper we derive a novel iterative algorithm for learning kernel machines. Our algorithm, $\textsf{AxlePro}$, extends the $\textsf{EigenPro}$ family of algorithms via momentum-based acceleration. $\textsf{AxlePro}$ can be applied to train kernel machines with arbitrary positive semidefinite kernels. We provide a convergence guarantee for the algorithm and demonstrate the speed-up of $\textsf{AxlePro}$ over competing algorithms via numerical experiments. Furthermore, we also derive a version of $\textsf{AxlePro}$ to train large kernel models over arbitrarily large datasets.
APA
Zhang, Y. & Pandit, P. (2025). AxlePro: Momentum-Accelerated Batched Training of Kernel Machines. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:1666-1674. Available from https://proceedings.mlr.press/v258/zhang25c.html.
