A Graph Theoretic Approach for Preference Learning with Feature Information

Aadirupa Saha, Arun Rajkumar
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:3138-3158, 2024.

Abstract

We consider the problem of ranking a set of $n$ items given a sample of their pairwise preferences. It is well known from classical results in the sorting literature that, without any further assumptions, one requires a sample size of $\Omega(n \log n)$ with active selection of pairs, whereas for a random set of pairwise preferences the bound could be as bad as $\Omega(n^2)$. However, what if the learner has access to additional knowledge of the items' features, and their pairwise preferences are known to be modelled in terms of their feature similarities: can these bounds be improved? In particular, we introduce a new probabilistic preference model, called feature-Bradley-Terry-Luce (f-BTL), for this purpose, and present a new least-squares-based algorithm, fBTL-LS, which requires far fewer than $O(n\log n)$ random pairs to obtain a ‘good’ ranking. The sample complexity of our proposed algorithms depends on the degree of feature correlation among the items; our analysis makes use of tools from classical graph matching theory, shedding light on the true complexity of the problem, which was not possible before with existing matrix-completion-based tools. We also prove the tightness of our results by showing a matching information-theoretic lower bound for the problem. Our theoretical results are corroborated with extensive experimental evaluations on a variety of datasets.
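The abstract names a feature-based Bradley-Terry-Luce model (f-BTL) and a least-squares algorithm (fBTL-LS) without spelling out their form. The sketch below is an illustrative assumption only: it takes a BTL-style model with item scores $s_i = x_i^\top w$, so that $P(i \succ j) = \sigma(s_i - s_j)$, and recovers $w$ by ordinary least squares on the empirical log-odds of a small random set of compared pairs. The paper's actual f-BTL model, the fBTL-LS algorithm, and their sample-complexity guarantees are given in the linked PDF.

```python
import numpy as np

# Hypothetical illustration of a feature-based BTL-style preference model and a
# least-squares fit. Names, constants, and the estimator here are assumptions
# for exposition, not the paper's fBTL-LS algorithm.

rng = np.random.default_rng(0)

n, d = 50, 5                      # number of items and feature dimension
X = rng.normal(size=(n, d))       # item feature vectors
w_true = rng.normal(size=d)       # unknown weight vector
scores = X @ w_true               # latent item scores s_i = x_i^T w

def pref_prob(i, j):
    """BTL-style preference: P(i beats j) = sigmoid(s_i - s_j)."""
    return 1.0 / (1.0 + np.exp(-(scores[i] - scores[j])))

# Observe a sparse random set of pairs, each compared k times.
m, k = 200, 25
pairs = [tuple(rng.choice(n, size=2, replace=False)) for _ in range(m)]
rows, y = [], []
for i, j in pairs:
    wins = rng.binomial(k, pref_prob(i, j))
    p_hat = np.clip(wins / k, 1e-3, 1 - 1e-3)
    rows.append(X[i] - X[j])                 # feature difference as regressor
    y.append(np.log(p_hat / (1 - p_hat)))    # empirical log-odds as target

# Least-squares estimate of w from the observed pairs, then rank by x_i^T w_hat.
A, b = np.asarray(rows), np.asarray(y)
w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
ranking = np.argsort(-(X @ w_hat))
print(ranking[:10])
```

Under this assumed setup, ranking by $x_i^\top \hat{w}$ is what makes feature information useful: only $d$ parameters need to be estimated rather than $n$ independent item scores, which is why far fewer than $n \log n$ sampled pairs can suffice when the features are informative.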

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-saha24a,
  title     = {A Graph Theoretic Approach for Preference Learning with Feature Information},
  author    = {Saha, Aadirupa and Rajkumar, Arun},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {3138--3158},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/saha24a/saha24a.pdf},
  url       = {https://proceedings.mlr.press/v244/saha24a.html},
  abstract  = {We consider the problem of ranking a set of $n$ items given a sample of their pairwise preferences. It is well known from classical results in the sorting literature that, without any further assumptions, one requires a sample size of $\Omega(n \log n)$ with active selection of pairs, whereas for a random set of pairwise preferences the bound could be as bad as $\Omega(n^2)$. However, what if the learner has access to additional knowledge of the items' features, and their pairwise preferences are known to be modelled in terms of their feature similarities: can these bounds be improved? In particular, we introduce a new probabilistic preference model, called feature-Bradley-Terry-Luce (f-BTL), for this purpose, and present a new least-squares-based algorithm, fBTL-LS, which requires far fewer than $O(n\log n)$ random pairs to obtain a ‘good’ ranking. The sample complexity of our proposed algorithms depends on the degree of feature correlation among the items; our analysis makes use of tools from classical graph matching theory, shedding light on the true complexity of the problem, which was not possible before with existing matrix-completion-based tools. We also prove the tightness of our results by showing a matching information-theoretic lower bound for the problem. Our theoretical results are corroborated with extensive experimental evaluations on a variety of datasets.}
}
Endnote
%0 Conference Paper
%T A Graph Theoretic Approach for Preference Learning with Feature Information
%A Aadirupa Saha
%A Arun Rajkumar
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-saha24a
%I PMLR
%P 3138--3158
%U https://proceedings.mlr.press/v244/saha24a.html
%V 244
%X We consider the problem of ranking a set of $n$ items given a sample of their pairwise preferences. It is well known from classical results in the sorting literature that, without any further assumptions, one requires a sample size of $\Omega(n \log n)$ with active selection of pairs, whereas for a random set of pairwise preferences the bound could be as bad as $\Omega(n^2)$. However, what if the learner has access to additional knowledge of the items' features, and their pairwise preferences are known to be modelled in terms of their feature similarities: can these bounds be improved? In particular, we introduce a new probabilistic preference model, called feature-Bradley-Terry-Luce (f-BTL), for this purpose, and present a new least-squares-based algorithm, fBTL-LS, which requires far fewer than $O(n\log n)$ random pairs to obtain a ‘good’ ranking. The sample complexity of our proposed algorithms depends on the degree of feature correlation among the items; our analysis makes use of tools from classical graph matching theory, shedding light on the true complexity of the problem, which was not possible before with existing matrix-completion-based tools. We also prove the tightness of our results by showing a matching information-theoretic lower bound for the problem. Our theoretical results are corroborated with extensive experimental evaluations on a variety of datasets.
APA
Saha, A. & Rajkumar, A. (2024). A Graph Theoretic Approach for Preference Learning with Feature Information. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:3138-3158. Available from https://proceedings.mlr.press/v244/saha24a.html.
