Metric Learning from Limited Pairwise Preference Comparisons
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:3571-3602, 2024.
Abstract
We study metric learning from preference comparisons under the ideal point model, in which a user prefers one item over another if it is closer to their latent ideal item. The items are embedded into R^d equipped with an unknown Mahalanobis distance shared across users. While recent work shows that it is possible to simultaneously recover the metric and the ideal items given O(d) pairwise comparisons per user, in practice we often have a limited budget of o(d) comparisons. We study whether the metric can still be recovered, even though learning individual ideal items is no longer possible. We show that, on the one hand, o(d) comparisons may reveal no information about the metric, even with infinitely many users. On the other hand, when comparisons are made over items that exhibit low-dimensional structure, each user can contribute to learning the metric restricted to a low-dimensional subspace, so that the metric can be jointly identified across users. We present a divide-and-conquer approach that achieves this, and provide theoretical recovery guarantees and empirical validation.
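To make the ideal point model concrete, the following is a minimal sketch (not the paper's code): a user with latent ideal item u prefers item a over item b exactly when a is closer to u under the shared Mahalanobis metric M. The matrix M, the points, and the numbers below are all illustrative assumptions.

```python
import math

def mahalanobis_sq(x, u, M):
    """Squared Mahalanobis distance (x - u)^T M (x - u) for a 2x2 PSD matrix M."""
    dx = [x[0] - u[0], x[1] - u[1]]
    Mdx = [M[0][0] * dx[0] + M[0][1] * dx[1],
           M[1][0] * dx[0] + M[1][1] * dx[1]]
    return dx[0] * Mdx[0] + dx[1] * Mdx[1]

def prefers(u, a, b, M):
    """Ideal point model: the user with ideal item u prefers a over b
    iff a is closer to u in the metric induced by M."""
    return mahalanobis_sq(a, u, M) < mahalanobis_sq(b, u, M)

# Hypothetical metric that weights the second coordinate more heavily.
M = [[1.0, 0.0],
     [0.0, 4.0]]
u = [0.0, 0.0]   # user's latent ideal item
a = [2.0, 0.0]   # squared distance under M: 4
b = [0.0, 1.5]   # squared distance under M: 9
print(prefers(u, a, b, M))  # True: a is preferred under M
```

Note that b is closer to u in plain Euclidean distance (1.5 vs. 2), yet a is preferred under M; this is why the comparisons carry information about the unknown metric.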