Deep Spectral Ranking

Ilkay Yildiz, Jennifer Dy, Deniz Erdogmus, Susan Ostmo, J. Peter Campbell, Michael F. Chiang, Stratis Ioannidis
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:361-369, 2021.

Abstract

Learning from ranking observations arises in many domains, and siamese deep neural networks have shown excellent inference performance in this setting. However, SGD does not scale well, as an epoch grows exponentially with the ranking observation size. We show that a spectral algorithm can be combined with deep learning methods to significantly accelerate training. We combine a spectral estimate of Plackett-Luce ranking scores with a deep model via the Alternating Directions Method of Multipliers with a Kullback-Leibler proximal penalty. Compared to a state-of-the-art siamese network, our algorithms are up to 175 times faster and attain better predictions by up to 26% Top-1 Accuracy and 6% Kendall-Tau correlation over five real-life ranking datasets.
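The spectral step the abstract refers to estimates Plackett-Luce scores as the stationary distribution of a Markov chain built from comparison outcomes. Below is a minimal illustrative sketch of such a spectral estimate for the pairwise special case (in the spirit of Rank Centrality / LSR); it is not the paper's implementation, and the function name and the absence of regularization are simplifying assumptions.

```python
import numpy as np

def spectral_scores(wins, iters=1000):
    """Illustrative spectral estimate of ranking scores (pairwise case).

    wins[i, j] = number of times item i beat item j.
    Builds a Markov chain that jumps from losers toward winners and
    returns its stationary distribution as the score vector.
    NOTE: a sketch only; real implementations add regularization so
    the chain is irreducible.
    """
    n = wins.shape[0]
    total = wins + wins.T
    # frac[i, j]: empirical fraction of comparisons in which i beat j
    frac = np.divide(wins, total, out=np.zeros_like(wins, dtype=float),
                     where=total > 0)
    # From state j, jump to state i with probability frac[i, j] / n
    M = frac.T / n
    np.fill_diagonal(M, 0.0)
    M += np.diag(1.0 - M.sum(axis=1))  # self-loops make rows sum to 1
    # Power iteration toward the stationary distribution
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ M
    return pi / pi.sum()
```

In the paper's setting, a spectral estimate of this kind replaces gradient steps over all rankings, and ADMM with a KL proximal penalty ties the estimated scores to the deep model's outputs.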

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-yildiz21a,
  title     = {Deep Spectral Ranking},
  author    = {Yildiz, Ilkay and Dy, Jennifer and Erdogmus, Deniz and Ostmo, Susan and Peter Campbell, J. and F. Chiang, Michael and Ioannidis, Stratis},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {361--369},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/yildiz21a/yildiz21a.pdf},
  url       = {https://proceedings.mlr.press/v130/yildiz21a.html},
  abstract  = {Learning from ranking observations arises in many domains, and siamese deep neural networks have shown excellent inference performance in this setting. However, SGD does not scale well, as an epoch grows exponentially with the ranking observation size. We show that a spectral algorithm can be combined with deep learning methods to significantly accelerate training. We combine a spectral estimate of Plackett-Luce ranking scores with a deep model via the Alternating Directions Method of Multipliers with a Kullback-Leibler proximal penalty. Compared to a state-of-the-art siamese network, our algorithms are up to 175 times faster and attain better predictions by up to 26% Top-1 Accuracy and 6% Kendall-Tau correlation over five real-life ranking datasets.}
}
Endnote
%0 Conference Paper
%T Deep Spectral Ranking
%A Ilkay Yildiz
%A Jennifer Dy
%A Deniz Erdogmus
%A Susan Ostmo
%A J. Peter Campbell
%A Michael F. Chiang
%A Stratis Ioannidis
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-yildiz21a
%I PMLR
%P 361--369
%U https://proceedings.mlr.press/v130/yildiz21a.html
%V 130
%X Learning from ranking observations arises in many domains, and siamese deep neural networks have shown excellent inference performance in this setting. However, SGD does not scale well, as an epoch grows exponentially with the ranking observation size. We show that a spectral algorithm can be combined with deep learning methods to significantly accelerate training. We combine a spectral estimate of Plackett-Luce ranking scores with a deep model via the Alternating Directions Method of Multipliers with a Kullback-Leibler proximal penalty. Compared to a state-of-the-art siamese network, our algorithms are up to 175 times faster and attain better predictions by up to 26% Top-1 Accuracy and 6% Kendall-Tau correlation over five real-life ranking datasets.
APA
Yildiz, I., Dy, J., Erdogmus, D., Ostmo, S., Peter Campbell, J., F. Chiang, M. & Ioannidis, S. (2021). Deep Spectral Ranking. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:361-369. Available from https://proceedings.mlr.press/v130/yildiz21a.html.