Toward Large Kernel Models

Amirhesam Abedsoltan, Mikhail Belkin, Parthe Pandit
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:61-78, 2023.

Abstract

Recent studies indicate that kernel machines can often perform comparably to, or better than, deep neural networks (DNNs) on small datasets. The interest in kernel machines has been further bolstered by the discovery of their equivalence to wide neural networks in certain regimes. However, a key feature of DNNs is their ability to scale the model size and training data size independently, whereas in traditional kernel machines the model size is tied to the data size. Because of this coupling, scaling kernel machines to large data has been computationally challenging. In this paper, we provide a way forward for constructing large-scale general kernel models, a generalization of kernel machines that decouples the model from the data, allowing training on large datasets. Specifically, we introduce EigenPro 3.0, an algorithm based on projected dual preconditioned SGD, and show scaling to model and data sizes that have not been possible with existing kernel methods. We provide a PyTorch-based implementation which can take advantage of multiple GPUs.
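The central object in the abstract, a general kernel model, is a kernel expansion f(x) = sum_j alpha_j K(x, z_j) whose p centers z_j are chosen independently of the n training points, so model size (p) and data size (n) can scale separately. The sketch below is an illustration of that decoupling, not the authors' EigenPro 3.0 code: the Laplacian kernel, bandwidth, learning rate, and plain mini-batch SGD on the square loss are placeholder choices, whereas EigenPro 3.0 replaces the plain SGD step with a projected, preconditioned update.

    # Minimal sketch of a general kernel model (not the authors' EigenPro 3.0
    # implementation): a kernel expansion over p fixed centers Z chosen
    # independently of the n training points, fit with plain mini-batch SGD.
    import torch

    def laplacian_kernel(X, Z, bandwidth=10.0):
        # K[i, j] = exp(-||x_i - z_j|| / bandwidth)
        return torch.exp(-torch.cdist(X, Z) / bandwidth)

    n, p, d = 10_000, 1_000, 20                  # data size n and model size p are decoupled
    X, y = torch.randn(n, d), torch.randn(n, 1)  # placeholder training data
    Z = torch.randn(p, d)                        # p centers, independent of the training set
    alpha = torch.zeros(p, 1)                    # weights: f(x) = sum_j alpha_j K(x, z_j)

    lr, batch_size = 1e-1, 256
    for _ in range(100):
        idx = torch.randint(0, n, (batch_size,))
        Kb = laplacian_kernel(X[idx], Z)                   # (batch_size, p) kernel block
        grad = Kb.T @ (Kb @ alpha - y[idx]) / batch_size   # square-loss gradient in alpha
        alpha -= lr * grad                                 # EigenPro 3.0 would precondition
                                                           # and project this step instead

    # Prediction reuses the same expansion over the fixed centers.
    y_hat = laplacian_kernel(torch.randn(5, d), Z) @ alpha

Note how the training loop touches the n data points only through mini-batch kernel blocks against the p centers; this is what lets the data size grow without growing the model.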

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-abedsoltan23a,
  title     = {Toward Large Kernel Models},
  author    = {Abedsoltan, Amirhesam and Belkin, Mikhail and Pandit, Parthe},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {61--78},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/abedsoltan23a/abedsoltan23a.pdf},
  url       = {https://proceedings.mlr.press/v202/abedsoltan23a.html},
  abstract  = {Recent studies indicate that kernel machines can often perform similarly or better than deep neural networks (DNNs) on small datasets. The interest in kernel machines has been additionally bolstered by the discovery of their equivalence to wide neural networks in certain regimes. However, a key feature of DNNs is their ability to scale the model size and training data size independently, whereas in traditional kernel machines model size is tied to data size. Because of this coupling, scaling kernel machines to large data has been computationally challenging. In this paper, we provide a way forward for constructing large-scale general kernel models, which are a generalization of kernel machines that decouples the model and data, allowing training on large datasets. Specifically, we introduce EigenPro 3.0, an algorithm based on projected dual preconditioned SGD and show scaling to model and data sizes which have not been possible with existing kernel methods. We provide a PyTorch based implementation which can take advantage of multiple GPUs.}
}
Endnote
%0 Conference Paper
%T Toward Large Kernel Models
%A Amirhesam Abedsoltan
%A Mikhail Belkin
%A Parthe Pandit
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-abedsoltan23a
%I PMLR
%P 61--78
%U https://proceedings.mlr.press/v202/abedsoltan23a.html
%V 202
%X Recent studies indicate that kernel machines can often perform similarly or better than deep neural networks (DNNs) on small datasets. The interest in kernel machines has been additionally bolstered by the discovery of their equivalence to wide neural networks in certain regimes. However, a key feature of DNNs is their ability to scale the model size and training data size independently, whereas in traditional kernel machines model size is tied to data size. Because of this coupling, scaling kernel machines to large data has been computationally challenging. In this paper, we provide a way forward for constructing large-scale general kernel models, which are a generalization of kernel machines that decouples the model and data, allowing training on large datasets. Specifically, we introduce EigenPro 3.0, an algorithm based on projected dual preconditioned SGD and show scaling to model and data sizes which have not been possible with existing kernel methods. We provide a PyTorch based implementation which can take advantage of multiple GPUs.
APA
Abedsoltan, A., Belkin, M. & Pandit, P. (2023). Toward Large Kernel Models. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:61-78. Available from https://proceedings.mlr.press/v202/abedsoltan23a.html.