Variance Reduced Online Gradient Descent for Kernelized Pairwise Learning with Limited Memory

Hilal AlQuabeh, Bhaskar Mukhoty, Bin Gu
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:28-43, 2024.

Abstract

Pairwise learning is essential in machine learning, especially for problems involving loss functions defined on pairs of training examples. Online gradient descent (OGD) algorithms have been proposed to handle online pairwise learning, where data arrives sequentially. However, the pairwise nature of the problem makes scalability challenging, as the gradient computation for a new sample involves all past samples. Recent advancements in OGD algorithms have aimed to reduce the complexity of calculating online gradients, achieving complexities less than $O(T)$ and even as low as $O(1)$. However, these approaches are primarily limited to linear models and introduce additional variance into the gradient estimates. In this study, we propose a limited-memory OGD algorithm that extends to kernel online pairwise learning while improving the sublinear regret. Specifically, we establish a clear connection between the variance of the online gradients and the regret, and construct online gradients from the most recent stratified samples, using a limited buffer of size $s$ to represent all past data; this yields a complexity of $O(sT)$ and employs $O(\sqrt{T}\log{T})$ random Fourier features for kernel approximation. Importantly, our theoretical results demonstrate that the variance-reduced online gradients lead to an improved sublinear regret bound. Experiments on real-world datasets demonstrate the superiority of our algorithm over both kernelized and linear online pairwise learning algorithms. The code is available at https://github.com/halquabeh/ACML-2023-FPOGD-Code.git.
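To make the setting concrete, the following is a minimal Python sketch (not the authors' released implementation) of online pairwise gradient descent with a size-s buffer and random Fourier features. The pairwise hinge (AUC-style) loss, the Gaussian kernel, the step-size schedule, and the reservoir-style buffer update are illustrative assumptions; the paper instead uses stratified sampling of recent data and its own analysis-driven choices.

import numpy as np

def rff_map(X, W, b):
    """Map inputs to D random Fourier features approximating a Gaussian kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def pairwise_ogd(stream, d, D=200, s=32, eta=0.1, gamma=1.0, seed=0):
    """Online pairwise gradient descent with a buffer of at most s past examples."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(gamma), size=(d, D))   # RFF frequencies (assumed Gaussian kernel)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)           # RFF phases
    w = np.zeros(D)                                      # model in the feature space
    buffer = []                                          # limited memory of past examples
    for t, (x_t, y_t) in enumerate(stream, start=1):
        z_t = rff_map(x_t[None, :], W, b)[0]
        grad, n_pairs = np.zeros(D), 0
        # Pair the new example with buffered examples of the opposite label.
        for (z_i, y_i) in buffer:
            if y_i == y_t:
                continue
            n_pairs += 1
            margin = (y_t - y_i) * (w @ (z_t - z_i)) / 2.0
            if margin < 1.0:  # pairwise hinge loss is active
                grad -= (y_t - y_i) * (z_t - z_i) / 2.0
        if n_pairs > 0:
            w -= eta / np.sqrt(t) * grad / n_pairs
        # Reservoir-style update keeps the buffer an unbiased sample of the past
        # (the paper's stratified buffer update differs).
        if len(buffer) < s:
            buffer.append((z_t, y_t))
        elif rng.random() < s / t:
            buffer[rng.integers(s)] = (z_t, y_t)
    return w, (W, b)

# Example usage on a synthetic stream with labels in {-1, +1}:
# rng = np.random.default_rng(1)
# data = [(rng.normal(size=5), 1 if rng.random() < 0.5 else -1) for _ in range(1000)]
# w, (W, b) = pairwise_ogd(iter(data), d=5)

Each round touches at most s buffered examples and a D-dimensional feature map, so the per-round cost is O(sD) rather than O(t); summed over T rounds this is the O(sT) gradient complexity referred to in the abstract, with D on the order of sqrt(T) log T features for the kernel approximation.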

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-alquabeh24a, title = {Variance Reduced Online Gradient Descent for Kernelized Pairwise Learning with Limited Memory}, author = {AlQuabeh, Hilal and Mukhoty, Bhaskar and Gu, Bin}, booktitle = {Proceedings of the 15th Asian Conference on Machine Learning}, pages = {28--43}, year = {2024}, editor = {Yanıkoğlu, Berrin and Buntine, Wray}, volume = {222}, series = {Proceedings of Machine Learning Research}, month = {11--14 Nov}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v222/alquabeh24a/alquabeh24a.pdf}, url = {https://proceedings.mlr.press/v222/alquabeh24a.html}, abstract = {Pairwise learning is essential in machine learning, especially for problems involving loss functions defined on pairs of training examples. Online gradient descent (OGD) algorithms have been proposed to handle online pairwise learning, where data arrives sequentially. However, the pairwise nature of the problem makes scalability challenging, as the gradient computation for a new sample involves all past samples. Recent advancements in OGD algorithms have aimed to reduce the complexity of calculating online gradients, achieving complexities less than $O(T)$ and even as low as $O(1)$. However, these approaches are primarily limited to linear models and introduce additional variance into the gradient estimates. In this study, we propose a limited-memory OGD algorithm that extends to kernel online pairwise learning while improving the sublinear regret. Specifically, we establish a clear connection between the variance of the online gradients and the regret, and construct online gradients from the most recent stratified samples, using a limited buffer of size $s$ to represent all past data; this yields a complexity of $O(sT)$ and employs $O(\sqrt{T}\log{T})$ random Fourier features for kernel approximation. Importantly, our theoretical results demonstrate that the variance-reduced online gradients lead to an improved sublinear regret bound. Experiments on real-world datasets demonstrate the superiority of our algorithm over both kernelized and linear online pairwise learning algorithms. The code is available at https://github.com/halquabeh/ACML-2023-FPOGD-Code.git.} }
Endnote
%0 Conference Paper %T Variance Reduced Online Gradient Descent for Kernelized Pairwise Learning with Limited Memory %A Hilal AlQuabeh %A Bhaskar Mukhoty %A Bin Gu %B Proceedings of the 15th Asian Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2024 %E Berrin Yanıkoğlu %E Wray Buntine %F pmlr-v222-alquabeh24a %I PMLR %P 28--43 %U https://proceedings.mlr.press/v222/alquabeh24a.html %V 222 %X Pairwise learning is essential in machine learning, especially for problems involving loss functions defined on pairs of training examples. Online gradient descent (OGD) algorithms have been proposed to handle online pairwise learning, where data arrives sequentially. However, the pairwise nature of the problem makes scalability challenging, as the gradient computation for a new sample involves all past samples. Recent advancements in OGD algorithms have aimed to reduce the complexity of calculating online gradients, achieving complexities less than $O(T)$ and even as low as $O(1)$. However, these approaches are primarily limited to linear models and introduce additional variance into the gradient estimates. In this study, we propose a limited-memory OGD algorithm that extends to kernel online pairwise learning while improving the sublinear regret. Specifically, we establish a clear connection between the variance of the online gradients and the regret, and construct online gradients from the most recent stratified samples, using a limited buffer of size $s$ to represent all past data; this yields a complexity of $O(sT)$ and employs $O(\sqrt{T}\log{T})$ random Fourier features for kernel approximation. Importantly, our theoretical results demonstrate that the variance-reduced online gradients lead to an improved sublinear regret bound. Experiments on real-world datasets demonstrate the superiority of our algorithm over both kernelized and linear online pairwise learning algorithms. The code is available at https://github.com/halquabeh/ACML-2023-FPOGD-Code.git.
APA
AlQuabeh, H., Mukhoty, B. & Gu, B. (2024). Variance Reduced Online Gradient Descent for Kernelized Pairwise Learning with Limited Memory. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:28-43. Available from https://proceedings.mlr.press/v222/alquabeh24a.html.
