Uniform Convergence of Rank-weighted Learning

Justin Khim, Liu Leqi, Adarsh Prasad, Pradeep Ravikumar
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5254-5263, 2020.

Abstract

The decision-theoretic foundations of classical machine learning models have largely focused on estimating model parameters that minimize the expectation of a given loss function. However, as machine learning models are deployed in varied contexts, such as high-stakes decision-making and societal settings, it is clear that these models are not evaluated solely by their average performance. In this work, we study a novel notion of L-Risk based on the classical idea of rank-weighted learning. These L-Risks, induced by rank-dependent weighting functions with bounded variation, unify popular risk measures such as conditional value-at-risk and those defined by cumulative prospect theory. We give uniform convergence bounds for this broad class of risk measures and study their consequences in a logistic regression example.
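
To make the L-Risk concrete, the following is a minimal sketch in Python of the standard empirical L-statistic form it builds on: a weighted average of the sorted losses, where the weight on each loss depends only on its rank. The function names (empirical_l_risk, cvar_weights) and the exact discretization of the CVaR weights are illustrative assumptions, not the paper's implementation; CVaR is shown only as one special case the rank-weighting framework covers.

import numpy as np

def empirical_l_risk(losses, weights):
    # Rank-weighted (L-statistic) empirical risk: the weights are
    # applied to the order statistics l_(1) <= ... <= l_(n).
    losses = np.sort(np.asarray(losses, dtype=float))
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights, losses))

def cvar_weights(n, alpha):
    # Illustrative discrete weights recovering conditional value-at-risk
    # at level alpha: uniform mass on the largest ceil(alpha * n) losses.
    k = int(np.ceil(alpha * n))
    w = np.zeros(n)
    w[-k:] = 1.0 / k
    return w

losses = np.random.default_rng(0).exponential(size=100)
print(empirical_l_risk(losses, np.full(100, 1 / 100)))   # uniform weights: the usual mean risk
print(empirical_l_risk(losses, cvar_weights(100, 0.1)))  # CVaR at level 0.1: mean of the worst 10%

With uniform weights the L-Risk reduces to the ordinary empirical risk, while concentrating weight on the largest losses recovers tail-sensitive measures such as CVaR; cumulative-prospect-theory weightings correspond to other rank-dependent weight profiles.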

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-khim20a,
  title     = {Uniform Convergence of Rank-weighted Learning},
  author    = {Khim, Justin and Leqi, Liu and Prasad, Adarsh and Ravikumar, Pradeep},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5254--5263},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/khim20a/khim20a.pdf},
  url       = {https://proceedings.mlr.press/v119/khim20a.html},
  abstract  = {The decision-theoretic foundations of classical machine learning models have largely focused on estimating model parameters that minimize the expectation of a given loss function. However, as machine learning models are deployed in varied contexts, such as high-stakes decision-making and societal settings, it is clear that these models are not evaluated solely by their average performance. In this work, we study a novel notion of L-Risk based on the classical idea of rank-weighted learning. These L-Risks, induced by rank-dependent weighting functions with bounded variation, unify popular risk measures such as conditional value-at-risk and those defined by cumulative prospect theory. We give uniform convergence bounds for this broad class of risk measures and study their consequences in a logistic regression example.}
}
Endnote
%0 Conference Paper
%T Uniform Convergence of Rank-weighted Learning
%A Justin Khim
%A Liu Leqi
%A Adarsh Prasad
%A Pradeep Ravikumar
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-khim20a
%I PMLR
%P 5254--5263
%U https://proceedings.mlr.press/v119/khim20a.html
%V 119
%X The decision-theoretic foundations of classical machine learning models have largely focused on estimating model parameters that minimize the expectation of a given loss function. However, as machine learning models are deployed in varied contexts, such as high-stakes decision-making and societal settings, it is clear that these models are not evaluated solely by their average performance. In this work, we study a novel notion of L-Risk based on the classical idea of rank-weighted learning. These L-Risks, induced by rank-dependent weighting functions with bounded variation, unify popular risk measures such as conditional value-at-risk and those defined by cumulative prospect theory. We give uniform convergence bounds for this broad class of risk measures and study their consequences in a logistic regression example.
APA
Khim, J., Leqi, L., Prasad, A. & Ravikumar, P. (2020). Uniform Convergence of Rank-weighted Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5254-5263. Available from https://proceedings.mlr.press/v119/khim20a.html.