A Probabilistic Theory of Supervised Similarity Learning for Pointwise ROC Curve Optimization
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5065-5074, 2018.
Abstract
The performance of many machine learning techniques depends on the choice of an appropriate similarity or distance measure on the input space. Similarity learning (or metric learning) aims at building such a measure from training data so that observations with the same (resp. different) label are as close (resp. far) as possible. In this paper, similarity learning is investigated from the perspective of pairwise bipartite ranking, where the goal is to rank the elements of a database by decreasing order of the probability that they share the same label with some query data point, based on the similarity scores. A natural performance criterion in this setting is pointwise ROC optimization: maximize the true positive rate under a fixed false positive rate. We study this novel perspective on similarity learning through a rigorous probabilistic framework. The empirical version of the problem gives rise to a constrained optimization formulation involving U-statistics, for which we derive universal learning rates as well as faster rates under a noise assumption on the data distribution. We also address the large-scale setting by analyzing the effect of sampling-based approximations. Our theoretical results are supported by illustrative numerical experiments.
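As a rough illustration of the pointwise ROC criterion described in the abstract (not the paper's actual estimator), the empirical version of "maximize the true positive rate at a fixed false positive rate" can be sketched as follows: given similarity scores for same-label pairs (positives) and different-label pairs (negatives), choose the smallest threshold whose empirical false positive rate stays below the target level, and report the resulting true positive rate. The function name and interface below are hypothetical.

```python
def tpr_at_fpr(pos_scores, neg_scores, alpha):
    """Empirical TPR at the smallest threshold with empirical FPR <= alpha.

    pos_scores: similarity scores of same-label pairs (positives),
    neg_scores: similarity scores of different-label pairs (negatives),
    alpha: target false positive rate in [0, 1].
    Returns (threshold, tpr).
    """
    neg_sorted = sorted(neg_scores, reverse=True)
    n = len(neg_sorted)
    # At most floor(alpha * n) negative pairs may score above the threshold.
    k = int(alpha * n)
    if k >= n:
        threshold = float("-inf")  # every pair may be predicted "same label"
    else:
        threshold = neg_sorted[k]
    tpr = sum(s > threshold for s in pos_scores) / len(pos_scores)
    return threshold, tpr
```

For instance, with positives scoring [0.9, 0.8, 0.7, 0.4], negatives scoring [0.6, 0.5, 0.3, 0.2], and alpha = 0.25, one negative pair (0.6) is allowed above the threshold 0.5, giving a true positive rate of 0.75.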