Performance Metric Elicitation from Pairwise Classifier Comparisons
Proceedings of Machine Learning Research, PMLR 89:371-379, 2019.
Abstract
Given a binary prediction problem, which performance metric should the classifier optimize? We address this question by formalizing the problem of Metric Elicitation. The goal of metric elicitation is to discover the performance metric of a practitioner, which reflects her innate rewards (costs) for correct (incorrect) classification. In particular, we focus on eliciting binary classification performance metrics from pairwise feedback, where a practitioner is queried to provide a relative preference between two classifiers. By exploiting key geometric properties of the space of confusion matrices, we obtain provably query-efficient algorithms for eliciting linear and linear-fractional performance metrics. We further show that our method is robust to feedback and finite-sample noise.
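To make the elicitation idea concrete, here is a minimal sketch (not the paper's algorithm) of eliciting a hidden linear metric from pairwise comparisons. It assumes a one-parameter family of linear metrics over (TPR, TNR), a simulated practitioner oracle with hypothetical hidden weights (0.7, 0.3), and a unit quarter-circle as a stand-in for the upper boundary of achievable confusion matrices; a ternary search then recovers the metric's tradeoff angle with a logarithmic number of queries.

```python
import math

# Hypothetical hidden weights of the practitioner's linear metric
# w1 * TPR + w2 * TNR; unknown to the elicitation procedure.
TRUE_WEIGHTS = (0.7, 0.3)

def oracle(point_a, point_b):
    """Simulated practitioner: prefers the operating point with higher
    hidden linear utility. Returns True if point_a is (weakly) preferred."""
    w1, w2 = TRUE_WEIGHTS
    score = lambda p: w1 * p[0] + w2 * p[1]
    return score(point_a) >= score(point_b)

def elicit_linear_metric(query_budget=40):
    """Ternary-search the tradeoff angle of a linear metric over (TPR, TNR),
    using only pairwise oracle comparisons of boundary operating points.
    The hidden utility is unimodal along the boundary, so each comparison
    discards a constant fraction of the search interval."""
    lo, hi = 0.0, math.pi / 2
    for _ in range(query_budget):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        # Operating points on the quarter circle, standing in for
        # (TPR, TNR) pairs on the boundary of achievable classifiers.
        a = (math.cos(m1), math.sin(m1))
        b = (math.cos(m2), math.sin(m2))
        if oracle(a, b):
            hi = m2   # maximum lies in [lo, m2]
        else:
            lo = m1   # maximum lies in [m1, hi]
    theta = (lo + hi) / 2
    # The maximizing angle reveals the normalized metric weights.
    return theta, (math.cos(theta), math.sin(theta))

theta_hat, weights_hat = elicit_linear_metric()
```

With 40 comparisons the interval shrinks by a factor of (2/3) per query, so `theta_hat` converges to `atan2(0.3, 0.7)` and `weights_hat` to the hidden weights up to normalization; linear-fractional metrics require a more careful query strategy, which is the subject of the paper.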