Stretching the Effectiveness of MLE from Accuracy to Bias for Pairwise Comparisons

Jingyan Wang, Nihar Shah, R Ravi
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:66-76, 2020.

Abstract

A number of applications (e.g., AI bot tournaments, sports, peer grading, crowdsourcing) use pairwise comparison data and the Bradley-Terry-Luce (BTL) model to evaluate a given collection of items (e.g., bots, teams, students, search results). Past work has shown that under the BTL model, the widely-used maximum-likelihood estimator (MLE) is minimax-optimal in estimating the item parameters, in terms of the mean squared error. However, another important desideratum for designing estimators is fairness. In this work, we consider one specific type of fairness, which is the notion of bias in statistics. We show that the MLE incurs a suboptimal rate in terms of bias. We then propose a simple modification to the MLE, which "stretches" the bounding box of the maximum-likelihood optimizer by a small constant factor from the underlying ground truth domain. We show that this simple modification leads to an improved rate in bias, while maintaining minimax-optimality in the mean squared error. In this manner, our proposed class of estimators provably improves fairness in the sense of bias without loss in accuracy.
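For a concrete picture of the construction the abstract describes, the following is a minimal illustrative sketch in Python (not the authors' reference code), assuming the standard BTL parameterization P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j)) with true parameters in the box [-B, B]^n. The data format, the optimizer, and the particular stretch factor shown here are illustrative assumptions, not the paper's specification.

# Sketch of the "stretched" MLE idea from the abstract, under these assumptions:
#   - BTL model: item i beats item j with probability sigmoid(theta_i - theta_j),
#     with the true parameter vector lying in [-B, B]^n.
#   - The vanilla MLE optimizes over the box [-B, B]^n; the "stretched" variant
#     optimizes over [-stretch*B, stretch*B]^n for some constant stretch > 1.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, wins):
    # wins: list of (i, j) pairs meaning "item i beat item j" in one comparison
    diffs = np.array([theta[i] - theta[j] for i, j in wins])
    # -log sigmoid(theta_i - theta_j), computed stably as log(1 + exp(-diff))
    return np.sum(np.logaddexp(0.0, -diffs))

def btl_mle(wins, n_items, B=1.0, stretch=1.0):
    """Constrained MLE over [-stretch*B, stretch*B]^n; stretch=1.0 is the vanilla MLE."""
    bounds = [(-stretch * B, stretch * B)] * n_items
    res = minimize(neg_log_likelihood, x0=np.zeros(n_items),
                   args=(wins,), method="L-BFGS-B", bounds=bounds)
    theta_hat = res.x
    return theta_hat - theta_hat.mean()  # center for display (BTL is shift-invariant)

# Example usage with 4 items and a few simulated comparison outcomes
wins = [(0, 1), (0, 2), (1, 2), (3, 2), (0, 3), (1, 3)]
print("vanilla MLE:  ", btl_mle(wins, n_items=4, B=1.0, stretch=1.0))
print("stretched MLE:", btl_mle(wins, n_items=4, B=1.0, stretch=2.0))

The only difference between the two estimators is the feasible box handed to the optimizer; the paper's result is that this small enlargement improves the bias rate while preserving minimax-optimal mean squared error.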

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-wang20a,
  title     = {Stretching the Effectiveness of MLE from Accuracy to Bias for Pairwise Comparisons},
  author    = {Wang, Jingyan and Shah, Nihar and Ravi, R},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {66--76},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/wang20a/wang20a.pdf},
  url       = {https://proceedings.mlr.press/v108/wang20a.html},
  abstract  = {A number of applications (e.g., AI bot tournaments, sports, peer grading, crowdsourcing) use pairwise comparison data and the Bradley-Terry-Luce (BTL) model to evaluate a given collection of items (e.g., bots, teams, students, search results). Past work has shown that under the BTL model, the widely-used maximum-likelihood estimator (MLE) is minimax-optimal in estimating the item parameters, in terms of the mean squared error. However, another important desideratum for designing estimators is fairness. In this work, we consider one specific type of fairness, which is the notion of bias in statistics. We show that the MLE incurs a suboptimal rate in terms of bias. We then propose a simple modification to the MLE, which "stretches" the bounding box of the maximum-likelihood optimizer by a small constant factor from the underlying ground truth domain. We show that this simple modification leads to an improved rate in bias, while maintaining minimax-optimality in the mean squared error. In this manner, our proposed class of estimators provably improves fairness in the sense of bias without loss in accuracy.}
}
Endnote
%0 Conference Paper
%T Stretching the Effectiveness of MLE from Accuracy to Bias for Pairwise Comparisons
%A Jingyan Wang
%A Nihar Shah
%A R Ravi
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-wang20a
%I PMLR
%P 66--76
%U https://proceedings.mlr.press/v108/wang20a.html
%V 108
%X A number of applications (e.g., AI bot tournaments, sports, peer grading, crowdsourcing) use pairwise comparison data and the Bradley-Terry-Luce (BTL) model to evaluate a given collection of items (e.g., bots, teams, students, search results). Past work has shown that under the BTL model, the widely-used maximum-likelihood estimator (MLE) is minimax-optimal in estimating the item parameters, in terms of the mean squared error. However, another important desideratum for designing estimators is fairness. In this work, we consider one specific type of fairness, which is the notion of bias in statistics. We show that the MLE incurs a suboptimal rate in terms of bias. We then propose a simple modification to the MLE, which "stretches" the bounding box of the maximum-likelihood optimizer by a small constant factor from the underlying ground truth domain. We show that this simple modification leads to an improved rate in bias, while maintaining minimax-optimality in the mean squared error. In this manner, our proposed class of estimators provably improves fairness in the sense of bias without loss in accuracy.
APA
Wang, J., Shah, N. & Ravi, R. (2020). Stretching the Effectiveness of MLE from Accuracy to Bias for Pairwise Comparisons. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:66-76. Available from https://proceedings.mlr.press/v108/wang20a.html.