Selective Preference Aggregation

Shreyas Kadekodi, Hayden Mctavish, Berk Ustun
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:28644-28669, 2025.

Abstract

Many applications in machine learning and decision-making rely on procedures to aggregate human preferences. In such tasks, individuals express ordinal preferences over a set of items through votes, ratings, or pairwise comparisons. We then summarize their collective preferences as a ranking. Standard methods for preference aggregation are designed to return rankings that arbitrate individual disagreements in ways that are faithful and fair. In this work, we introduce a paradigm for selective aggregation, where we can avoid the need to arbitrate dissent by abstaining from comparison. We summarize collective preferences as a selective ranking – i.e., a partial order in which we only compare items on which at least $100 \cdot (1 - \tau)\%$ of individuals agree. We develop algorithms to build selective rankings that achieve all possible trade-offs between comparability and disagreement, and derive formal guarantees on their safety and stability. We conduct an extensive set of experiments on real-world datasets to benchmark our approach and demonstrate its functionality. Our results show that selective aggregation can promote transparency and robustness by revealing disagreement and abstaining from arbitration.
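
To make the comparability threshold concrete, below is a minimal illustrative sketch in Python. It is not the authors' algorithm: the voter-by-item rank matrix, the function name, and the simple pairwise reading of agreement (keep a comparison only when at least a $1 - \tau$ fraction of individuals rank one item above the other, abstain otherwise) are assumptions made for illustration; the paper's actual construction of selective rankings may differ.

import numpy as np

def selective_comparisons(rankings, tau):
    """Return pairs (i, j) such that at least a 1 - tau fraction of
    individuals rank item i above item j; all other pairs are abstained from.

    rankings: (n_voters, n_items) array; rankings[v, i] is the rank that
              voter v assigns to item i (lower = more preferred).
    tau:      dissent budget in [0, 1); tau = 0 requires unanimity.
    """
    rankings = np.asarray(rankings)
    n_voters, n_items = rankings.shape
    comparable = []
    for i in range(n_items):
        for j in range(n_items):
            if i == j:
                continue
            # Fraction of voters who strictly prefer item i to item j.
            agreement = np.mean(rankings[:, i] < rankings[:, j])
            if agreement >= 1 - tau:
                comparable.append((i, j))  # i is ranked above j
    return comparable

# Hypothetical example: 5 voters, 3 items; lower rank = more preferred.
votes = np.array([
    [0, 1, 2],
    [0, 1, 2],
    [0, 2, 1],
    [0, 1, 2],
    [1, 0, 2],
])
print(selective_comparisons(votes, tau=0.1))
# Prints [(0, 2)]: only the item-0 vs item-2 comparison clears the 90%
# agreement bar; pairs with more than tau dissent are left incomparable.

Here, raising tau admits more comparisons (at tau = 0.25 all three pairwise comparisons survive), which is the comparability-versus-disagreement trade-off the abstract refers to.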

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-kadekodi25a,
  title = {Selective Preference Aggregation},
  author = {Kadekodi, Shreyas and Mctavish, Hayden and Ustun, Berk},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {28644--28669},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kadekodi25a/kadekodi25a.pdf},
  url = {https://proceedings.mlr.press/v267/kadekodi25a.html},
  abstract = {Many applications in machine learning and decision-making rely on procedures to aggregate human preferences. In such tasks, individuals express ordinal preferences over a set of items through votes, ratings, or pairwise comparisons. We then summarize their collective preferences as a ranking. Standard methods for preference aggregation are designed to return rankings that arbitrate individual disagreements in ways that are faithful and fair. In this work, we introduce a paradigm for selective aggregation, where we can avoid the need to arbitrate dissent by abstaining from comparison. We summarize collective preferences as a selective ranking – i.e., a partial order in which we only compare items on which at least $100 \cdot (1 - \tau)\%$ of individuals agree. We develop algorithms to build selective rankings that achieve all possible trade-offs between comparability and disagreement, and derive formal guarantees on their safety and stability. We conduct an extensive set of experiments on real-world datasets to benchmark our approach and demonstrate its functionality. Our results show that selective aggregation can promote transparency and robustness by revealing disagreement and abstaining from arbitration.}
}
Endnote
%0 Conference Paper
%T Selective Preference Aggregation
%A Shreyas Kadekodi
%A Hayden Mctavish
%A Berk Ustun
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kadekodi25a
%I PMLR
%P 28644--28669
%U https://proceedings.mlr.press/v267/kadekodi25a.html
%V 267
%X Many applications in machine learning and decision-making rely on procedures to aggregate human preferences. In such tasks, individuals express ordinal preferences over a set of items through votes, ratings, or pairwise comparisons. We then summarize their collective preferences as a ranking. Standard methods for preference aggregation are designed to return rankings that arbitrate individual disagreements in ways that are faithful and fair. In this work, we introduce a paradigm for selective aggregation, where we can avoid the need to arbitrate dissent by abstaining from comparison. We summarize collective preferences as a selective ranking – i.e., a partial order in which we only compare items on which at least $100 \cdot (1 - \tau)\%$ of individuals agree. We develop algorithms to build selective rankings that achieve all possible trade-offs between comparability and disagreement, and derive formal guarantees on their safety and stability. We conduct an extensive set of experiments on real-world datasets to benchmark our approach and demonstrate its functionality. Our results show that selective aggregation can promote transparency and robustness by revealing disagreement and abstaining from arbitration.
APA
Kadekodi, S., Mctavish, H. & Ustun, B. (2025). Selective Preference Aggregation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:28644-28669. Available from https://proceedings.mlr.press/v267/kadekodi25a.html.