On NDCG Consistency of Listwise Ranking Methods

Pradeep Ravikumar, Ambuj Tewari, Eunho Yang
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:618-626, 2011.

Abstract

We examine the consistency of listwise ranking methods with respect to the popular Normalized Discounted Cumulative Gain (NDCG) criterion. The most successful listwise approaches replace NDCG with a surrogate that is easier to optimize. We characterize NDCG consistent surrogates to discover a surprising fact: several commonly used surrogates are NDCG inconsistent. We then show how to change them so that they become NDCG consistent in a strong but natural sense. An explicit characterization of strong NDCG consistency is provided. Going beyond qualitative consistency considerations, we also give quantitative statements that enable us to transform the excess error, as measured in the surrogate, to the excess error in comparison to the Bayes optimal ranking function for NDCG. Finally, we also derive improved results if a certain natural “low noise” or “large margin” condition holds. Our experiments demonstrate that ensuring NDCG consistency does improve the performance of listwise ranking methods on real-world datasets. Moreover, a novel surrogate function suggested by our theoretical results leads to further improvements over NDCG consistent versions of existing surrogates.
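For readers unfamiliar with the criterion, NDCG can be sketched in a few lines: the DCG of a ranked list is a position-discounted sum of per-item gains, normalized by the DCG of the ideal (relevance-sorted) ordering. The sketch below uses the common exponential gain 2^rel − 1 and log2 discount; these particular gain and discount choices are illustrative assumptions, not taken from the paper.

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: gain (2^rel - 1) at position i
    is discounted by log2(i + 2) (positions are 0-indexed)."""
    return sum((2 ** rel - 1) / math.log2(pos + 2)
               for pos, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """NDCG: DCG of the given ordering divided by the DCG of the
    ideal ordering (relevances sorted in decreasing order)."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# A perfect ranking scores exactly 1.0; any swap that moves a more
# relevant item below a less relevant one strictly lowers the score.
print(ndcg([3, 2, 1, 0]))  # → 1.0
print(ndcg([2, 3, 1, 0]))  # strictly less than 1.0
```

A surrogate loss, in the paper's sense, replaces the (discrete, non-differentiable) negative NDCG above with a smooth function of real-valued scores that is easier to optimize.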

Cite this Paper


BibTeX
@InProceedings{pmlr-v15-ravikumar11a,
  title = {On NDCG Consistency of Listwise Ranking Methods},
  author = {Ravikumar, Pradeep and Tewari, Ambuj and Yang, Eunho},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  pages = {618--626},
  year = {2011},
  editor = {Gordon, Geoffrey and Dunson, David and Dudík, Miroslav},
  volume = {15},
  series = {Proceedings of Machine Learning Research},
  address = {Fort Lauderdale, FL, USA},
  month = {11--13 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v15/ravikumar11a/ravikumar11a.pdf},
  url = {https://proceedings.mlr.press/v15/ravikumar11a.html},
  abstract = {We examine the consistency of listwise ranking methods with respect to the popular Normalized Discounted Cumulative Gain (NDCG) criterion. The most successful listwise approaches replace NDCG with a surrogate that is easier to optimize. We characterize NDCG consistent surrogates to discover a surprising fact: several commonly used surrogates are NDCG inconsistent. We then show how to change them so that they become NDCG consistent in a strong but natural sense. An explicit characterization of strong NDCG consistency is provided. Going beyond qualitative consistency considerations, we also give quantitative statements that enable us to transform the excess error, as measured in the surrogate, to the excess error in comparison to the Bayes optimal ranking function for NDCG. Finally, we also derive improved results if a certain natural “low noise” or “large margin” condition holds. Our experiments demonstrate that ensuring NDCG consistency does improve the performance of listwise ranking methods on real-world datasets. Moreover, a novel surrogate function suggested by our theoretical results leads to further improvements over NDCG consistent versions of existing surrogates.}
}
Endnote
%0 Conference Paper
%T On NDCG Consistency of Listwise Ranking Methods
%A Pradeep Ravikumar
%A Ambuj Tewari
%A Eunho Yang
%B Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2011
%E Geoffrey Gordon
%E David Dunson
%E Miroslav Dudík
%F pmlr-v15-ravikumar11a
%I PMLR
%P 618--626
%U https://proceedings.mlr.press/v15/ravikumar11a.html
%V 15
%X We examine the consistency of listwise ranking methods with respect to the popular Normalized Discounted Cumulative Gain (NDCG) criterion. The most successful listwise approaches replace NDCG with a surrogate that is easier to optimize. We characterize NDCG consistent surrogates to discover a surprising fact: several commonly used surrogates are NDCG inconsistent. We then show how to change them so that they become NDCG consistent in a strong but natural sense. An explicit characterization of strong NDCG consistency is provided. Going beyond qualitative consistency considerations, we also give quantitative statements that enable us to transform the excess error, as measured in the surrogate, to the excess error in comparison to the Bayes optimal ranking function for NDCG. Finally, we also derive improved results if a certain natural “low noise” or “large margin” condition holds. Our experiments demonstrate that ensuring NDCG consistency does improve the performance of listwise ranking methods on real-world datasets. Moreover, a novel surrogate function suggested by our theoretical results leads to further improvements over NDCG consistent versions of existing surrogates.
RIS
TY - CPAPER
TI - On NDCG Consistency of Listwise Ranking Methods
AU - Pradeep Ravikumar
AU - Ambuj Tewari
AU - Eunho Yang
BT - Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
DA - 2011/06/14
ED - Geoffrey Gordon
ED - David Dunson
ED - Miroslav Dudík
ID - pmlr-v15-ravikumar11a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 15
SP - 618
EP - 626
L1 - http://proceedings.mlr.press/v15/ravikumar11a/ravikumar11a.pdf
UR - https://proceedings.mlr.press/v15/ravikumar11a.html
AB - We examine the consistency of listwise ranking methods with respect to the popular Normalized Discounted Cumulative Gain (NDCG) criterion. The most successful listwise approaches replace NDCG with a surrogate that is easier to optimize. We characterize NDCG consistent surrogates to discover a surprising fact: several commonly used surrogates are NDCG inconsistent. We then show how to change them so that they become NDCG consistent in a strong but natural sense. An explicit characterization of strong NDCG consistency is provided. Going beyond qualitative consistency considerations, we also give quantitative statements that enable us to transform the excess error, as measured in the surrogate, to the excess error in comparison to the Bayes optimal ranking function for NDCG. Finally, we also derive improved results if a certain natural “low noise” or “large margin” condition holds. Our experiments demonstrate that ensuring NDCG consistency does improve the performance of listwise ranking methods on real-world datasets. Moreover, a novel surrogate function suggested by our theoretical results leads to further improvements over NDCG consistent versions of existing surrogates.
ER -
APA
Ravikumar, P., Tewari, A. & Yang, E. (2011). On NDCG Consistency of Listwise Ranking Methods. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 15:618-626. Available from https://proceedings.mlr.press/v15/ravikumar11a.html.