Stability and Differential Privacy of Stochastic Gradient Descent for Pairwise Learning with Non-Smooth Loss

Zhenhuan Yang, Yunwen Lei, Siwei Lyu, Yiming Ying
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2026-2034, 2021.

Abstract

Pairwise learning has recently received increasing attention since it subsumes many important machine learning tasks (e.g. AUC maximization and metric learning) into a unifying framework. In this paper, we give the first-known stability and generalization analysis of stochastic gradient descent (SGD) for pairwise learning with non-smooth loss functions, which are widely used (e.g. Ranking SVM with the hinge loss). We introduce a novel decomposition in its stability analysis to decouple the pairwisely dependent random variables, and derive generalization bounds consistent with pointwise learning. Furthermore, we apply our stability analysis to develop differentially private SGD for pairwise learning, for which our utility bounds match those of the state-of-the-art output perturbation method (Huai et al., 2020) for smooth losses. Finally, we illustrate the results using specific examples of AUC maximization and similarity metric learning. As a byproduct, we provide an affirmative answer to an open question on the advantage of the nuclear-norm constraint over the Frobenius-norm constraint in similarity metric learning.
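To make the setting concrete, the following minimal sketch (not the paper's exact algorithm) runs projected SGD on the non-smooth pairwise hinge loss used in AUC maximization, then adds Gaussian noise to the final iterate in the spirit of output perturbation. The step-size schedule, projection radius R, and noise scale sigma are illustrative assumptions, not values derived in the paper.

# A minimal sketch, assuming a pairwise hinge loss for AUC maximization:
#   l(w; x+, x-) = max(0, 1 - w^T (x+ - x-)),
# with projection onto an L2 ball and Gaussian output perturbation.
import numpy as np

def pairwise_hinge_sgd(X_pos, X_neg, T=1000, eta0=0.1, R=1.0, seed=0):
    """Run T steps of SGD on the pairwise hinge loss, projecting the
    iterate onto the Euclidean ball of radius R after every step."""
    rng = np.random.default_rng(seed)
    d = X_pos.shape[1]
    w = np.zeros(d)
    for t in range(1, T + 1):
        # Sample one positive/negative pair uniformly at random.
        xp = X_pos[rng.integers(len(X_pos))]
        xn = X_neg[rng.integers(len(X_neg))]
        diff = xp - xn
        # Subgradient step: the hinge loss is non-smooth, so we use a
        # subgradient (-diff when the margin is violated, 0 otherwise).
        if 1.0 - w @ diff > 0:
            w = w + (eta0 / np.sqrt(t)) * diff
        # Projection keeps the iterates bounded.
        norm = np.linalg.norm(w)
        if norm > R:
            w = w * (R / norm)
    return w

def output_perturbation(w, sigma, seed=1):
    """Add isotropic Gaussian noise to the final iterate; in practice sigma
    would be calibrated to the L2-sensitivity of the SGD output and the
    (epsilon, delta) privacy budget."""
    rng = np.random.default_rng(seed)
    return w + sigma * rng.normal(size=w.shape)

Given feature matrices X_pos and X_neg for the positive and negative classes, w = output_perturbation(pairwise_hinge_sgd(X_pos, X_neg), sigma=0.05) yields a privatized linear scoring function; the scores w @ x can then be ranked to evaluate AUC.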

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-yang21c,
  title     = {Stability and Differential Privacy of Stochastic Gradient Descent for Pairwise Learning with Non-Smooth Loss},
  author    = {Yang, Zhenhuan and Lei, Yunwen and Lyu, Siwei and Ying, Yiming},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2026--2034},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/yang21c/yang21c.pdf},
  url       = {https://proceedings.mlr.press/v130/yang21c.html},
  abstract  = {Pairwise learning has recently received increasing attention since it subsumes many important machine learning tasks (e.g. AUC maximization and metric learning) into a unifying framework. In this paper, we give the first-ever-known stability and generalization analysis of stochastic gradient descent (SGD) for pairwise learning with non-smooth loss functions, which are widely used (e.g. Ranking SVM with the hinge loss). We introduce a novel decomposition in its stability analysis to decouple the pairwisely dependent random variables, and derive generalization bounds consistent with pointwise learning. Furthermore, we apply our stability analysis to develop differentially private SGD for pairwise learning, for which our utility bounds match with the state-of-the-art output perturbation method (Huai et al., 2020) with smooth losses. Finally, we illustrate the results using specific examples of AUC maximization and similarity metric learning. As a byproduct, we provide an affirmative solution to an open question on the advantage of the nuclear-norm constraint over Frobenius norm constraint in similarity metric learning.}
}
Endnote
%0 Conference Paper
%T Stability and Differential Privacy of Stochastic Gradient Descent for Pairwise Learning with Non-Smooth Loss
%A Zhenhuan Yang
%A Yunwen Lei
%A Siwei Lyu
%A Yiming Ying
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-yang21c
%I PMLR
%P 2026--2034
%U https://proceedings.mlr.press/v130/yang21c.html
%V 130
%X Pairwise learning has recently received increasing attention since it subsumes many important machine learning tasks (e.g. AUC maximization and metric learning) into a unifying framework. In this paper, we give the first-ever-known stability and generalization analysis of stochastic gradient descent (SGD) for pairwise learning with non-smooth loss functions, which are widely used (e.g. Ranking SVM with the hinge loss). We introduce a novel decomposition in its stability analysis to decouple the pairwisely dependent random variables, and derive generalization bounds consistent with pointwise learning. Furthermore, we apply our stability analysis to develop differentially private SGD for pairwise learning, for which our utility bounds match with the state-of-the-art output perturbation method (Huai et al., 2020) with smooth losses. Finally, we illustrate the results using specific examples of AUC maximization and similarity metric learning. As a byproduct, we provide an affirmative solution to an open question on the advantage of the nuclear-norm constraint over Frobenius norm constraint in similarity metric learning.
APA
Yang, Z., Lei, Y., Lyu, S. & Ying, Y. (2021). Stability and Differential Privacy of Stochastic Gradient Descent for Pairwise Learning with Non-Smooth Loss. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2026-2034. Available from https://proceedings.mlr.press/v130/yang21c.html.