A Learning Theory of Ranking Aggregation

Anna Korba, Stéphan Clemencon, Eric Sibony
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1001-1010, 2017.

Abstract

Originally formulated in Social Choice theory, Ranking Aggregation, also referred to as Consensus Ranking, has motivated the development of numerous statistical models since the middle of the 20th century. Recently, the analysis of ranking/preference data has been the subject of a renewed interest in machine learning, boosted by modern applications such as meta-search engines, giving rise to the design of various scalable algorithmic approaches for approximately computing ranking medians, viewed as solutions of a discrete (generally NP-hard) minimization problem. This paper develops a statistical learning theory for ranking aggregation in a general probabilistic setting (avoiding any rigid ranking model assumptions), assessing the generalization ability of empirical ranking medians. Universal rate bounds are established and the situations where convergence occurs at an exponential rate are fully characterized. Minimax lower bounds are also proved, showing that the rate bounds we obtain are optimal.
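The "empirical ranking median" the abstract refers to can be illustrated with a brute-force Kemeny consensus under the Kendall tau distance, a standard instance of the discrete minimization problem mentioned above. This is only a minimal sketch for intuition (the paper's probabilistic setting and results are not tied to this toy implementation, and the exhaustive search below is exactly the NP-hard part that scalable approaches approximate):

```python
from itertools import permutations

def kendall_tau(a, b):
    """Number of pairwise disagreements between rankings a and b,
    each given as a tuple of items in order of preference."""
    pos_b = {item: i for i, item in enumerate(b)}
    n = len(a)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if pos_b[a[i]] > pos_b[a[j]]
    )

def empirical_median(rankings):
    """Brute-force Kemeny consensus: the permutation minimizing the
    total Kendall tau distance to the observed rankings.
    Exponential in the number of items -- illustration only."""
    items = rankings[0]
    return min(
        permutations(items),
        key=lambda sigma: sum(kendall_tau(sigma, r) for r in rankings),
    )

# Three observed rankings of items {1, 2, 3}:
sample = [(1, 2, 3), (1, 3, 2), (2, 1, 3)]
print(empirical_median(sample))  # -> (1, 2, 3)
```

Here `(1, 2, 3)` wins because it disagrees with the sample on only two item pairs in total; the paper studies how well such an empirical median generalizes to the underlying ranking distribution.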

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-korba17a,
  title     = {{A Learning Theory of Ranking Aggregation}},
  author    = {Anna Korba and Stéphan Clemencon and Eric Sibony},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {1001--1010},
  year      = {2017},
  editor    = {Aarti Singh and Jerry Zhu},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/korba17a/korba17a.pdf},
  url       = {http://proceedings.mlr.press/v54/korba17a.html},
  abstract  = {Originally formulated in Social Choice theory, Ranking Aggregation, also referred to as Consensus Ranking, has motivated the development of numerous statistical models since the middle of the 20th century. Recently, the analysis of ranking/preference data has been the subject of a renewed interest in machine learning, boosted by modern applications such as meta-search engines, giving rise to the design of various scalable algorithmic approaches for approximately computing ranking medians, viewed as solutions of a discrete (generally NP-hard) minimization problem. This paper develops a statistical learning theory for ranking aggregation in a general probabilistic setting (avoiding any rigid ranking model assumptions), assessing the generalization ability of empirical ranking medians. Universal rate bounds are established and the situations where convergence occurs at an exponential rate are fully characterized. Minimax lower bounds are also proved, showing that the rate bounds we obtain are optimal.}
}
Endnote
%0 Conference Paper
%T A Learning Theory of Ranking Aggregation
%A Anna Korba
%A Stéphan Clemencon
%A Eric Sibony
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-korba17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 1001--1010
%U http://proceedings.mlr.press
%V 54
%W PMLR
%X Originally formulated in Social Choice theory, Ranking Aggregation, also referred to as Consensus Ranking, has motivated the development of numerous statistical models since the middle of the 20th century. Recently, the analysis of ranking/preference data has been the subject of a renewed interest in machine learning, boosted by modern applications such as meta-search engines, giving rise to the design of various scalable algorithmic approaches for approximately computing ranking medians, viewed as solutions of a discrete (generally NP-hard) minimization problem. This paper develops a statistical learning theory for ranking aggregation in a general probabilistic setting (avoiding any rigid ranking model assumptions), assessing the generalization ability of empirical ranking medians. Universal rate bounds are established and the situations where convergence occurs at an exponential rate are fully characterized. Minimax lower bounds are also proved, showing that the rate bounds we obtain are optimal.
APA
Korba, A., Clemencon, S. & Sibony, E. (2017). A Learning Theory of Ranking Aggregation. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in PMLR 54:1001-1010.