RankDistil: Knowledge Distillation for Ranking

Sashank Reddi, Rama Kumar Pasumarthi, Aditya Menon, Ankit Singh Rawat, Felix Yu, Seungyeon Kim, Andreas Veit, Sanjiv Kumar
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2368-2376, 2021.

Abstract

Knowledge distillation is an approach to improving the performance of a student model by using the knowledge of a complex teacher. Despite its success in several deep learning applications, the study of distillation has been mostly confined to classification settings. In particular, the use of distillation in top-k ranking settings, where the goal is to rank the k most relevant items correctly, remains largely unexplored. In this paper, we study such ranking problems through the lens of distillation. We present a distillation framework for top-k ranking and draw connections with existing ranking methods. The core idea of this framework is to preserve the ranking at the top by matching the order of items between the student and the teacher, while penalizing large scores for items ranked low by the teacher. Building on this, we develop a novel distillation approach, RankDistil, specifically catered to ranking problems with a large number of items to rank, and establish a statistical basis for the method. Finally, we conduct experiments that demonstrate that RankDistil yields benefits over commonly used baselines for ranking problems.
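
To make the core idea concrete, the sketch below illustrates the two ingredients named in the abstract: matching the teacher's ordering at the top and penalizing large student scores on items the teacher ranks low. This is not the paper's actual RankDistil objective; it is a minimal PyTorch sketch under assumed choices (a pairwise hinge for order matching, a squared penalty on low-ranked items, and hypothetical hyperparameters k, margin, and penalty_weight). The function name topk_distillation_loss is illustrative, not from the paper.

import torch

def topk_distillation_loss(student_scores, teacher_scores, k=5, margin=1.0, penalty_weight=0.1):
    # Hypothetical sketch of a top-k distillation loss (not the paper's exact objective).
    # student_scores, teacher_scores: 1-D tensors of per-item scores for one query.

    # Items the teacher ranks in its top k, in teacher order (highest score first).
    topk_idx = torch.topk(teacher_scores, k).indices

    # Remaining items, i.e. those the teacher ranks low.
    low_mask = torch.ones_like(teacher_scores, dtype=torch.bool)
    low_mask[topk_idx] = False
    low_student = student_scores[low_mask]

    # (1) Match the teacher's order at the top: for every pair (i, j) where the
    # teacher ranks i above j within its top k, ask the student to score i at
    # least `margin` higher than j (pairwise hinge).
    s_top = student_scores[topk_idx]                  # student scores in teacher order
    diffs = s_top.unsqueeze(1) - s_top.unsqueeze(0)   # diffs[i, j] = s_i - s_j
    upper = torch.triu(torch.ones(k, k), diagonal=1).bool()
    order_loss = torch.clamp(margin - diffs[upper], min=0.0).mean()

    # (2) Penalize large student scores on items the teacher ranks low.
    low_penalty = torch.clamp(low_student, min=0.0).pow(2).mean()

    return order_loss + penalty_weight * low_penalty

# Example usage on a single query with 6 items:
teacher = torch.tensor([3.2, 0.1, 2.5, -1.0, 0.7, 1.9])
student = torch.tensor([2.0, 0.3, 2.4, 0.5, -0.2, 1.1], requires_grad=True)
loss = topk_distillation_loss(student, teacher, k=3)
loss.backward()

In practice, such a term would be averaged over queries and combined with the usual supervised ranking loss; the particular surrogate losses and their weighting in RankDistil are specified in the paper itself.
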

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-reddi21a,
  title     = {RankDistil: Knowledge Distillation for Ranking},
  author    = {Reddi, Sashank and Kumar Pasumarthi, Rama and Menon, Aditya and Singh Rawat, Ankit and Yu, Felix and Kim, Seungyeon and Veit, Andreas and Kumar, Sanjiv},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2368--2376},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/reddi21a/reddi21a.pdf},
  url       = {https://proceedings.mlr.press/v130/reddi21a.html}
}
Endnote
%0 Conference Paper
%T RankDistil: Knowledge Distillation for Ranking
%A Sashank Reddi
%A Rama Kumar Pasumarthi
%A Aditya Menon
%A Ankit Singh Rawat
%A Felix Yu
%A Seungyeon Kim
%A Andreas Veit
%A Sanjiv Kumar
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-reddi21a
%I PMLR
%P 2368--2376
%U https://proceedings.mlr.press/v130/reddi21a.html
%V 130
APA
Reddi, S., Kumar Pasumarthi, R., Menon, A., Singh Rawat, A., Yu, F., Kim, S., Veit, A. & Kumar, S. (2021). RankDistil: Knowledge Distillation for Ranking. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2368-2376. Available from https://proceedings.mlr.press/v130/reddi21a.html.
