Learning Inconsistent Preferences with Gaussian Processes

Siu Lun Chau, Javier Gonzalez, Dino Sejdinovic
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:2266-2281, 2022.

Abstract

We revisit the widely used preferential Gaussian processes (PGP) of Chu and Ghahramani [2005] and challenge their modelling assumption that imposes rankability of data items via latent utility function values. We propose a generalisation of PGP which can capture more expressive latent preferential structures in the data and thus be used to model inconsistent preferences, i.e., where transitivity is violated, or to discover clusters of comparable items via spectral decomposition of the learned preference functions. We also consider the properties of the associated covariance kernel functions and their reproducing kernel Hilbert spaces (RKHS), giving a simple construction that satisfies universality in the space of preference functions. Finally, we provide an extensive set of numerical experiments on simulated and real-world datasets showcasing the competitiveness of our proposed method with the state of the art. Our experimental findings support the conjecture that violations of rankability are ubiquitous in real-world preferential data.
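
For a concrete feel for the idea, the following Python sketch (written for this page, not taken from the paper) shows one way to define a positive-semidefinite kernel over ordered item pairs whose Gaussian process samples are skew-symmetric preference functions g(x, x') = -g(x', x), and hence can encode intransitive preferences that no single latent utility f with g(x, x') = f(x) - f(x') can represent. The helpers rbf and pref_kernel and all variable names are illustrative assumptions, not the paper's exact construction.

import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Base RBF kernel between item sets A (n, d) and B (m, d).
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * sq / lengthscale**2)

def pref_kernel(P, Q, lengthscale=1.0):
    # Skew-symmetric pair kernel:
    #   k((x, x'), (y, y')) = k(x, y) k(x', y') - k(x, y') k(x', y).
    # P, Q have shape (n, 2, d): each row is an ordered pair of d-dim items.
    k = lambda A, B: rbf(A, B, lengthscale)
    return (k(P[:, 0], Q[:, 0]) * k(P[:, 1], Q[:, 1])
            - k(P[:, 0], Q[:, 1]) * k(P[:, 1], Q[:, 0]))

rng = np.random.default_rng(0)
n = 5
items = rng.normal(size=(n, 2))                      # n items in R^2
idx = np.array([(i, j) for i in range(n) for j in range(n)])
P = items[idx]                                       # all ordered pairs, shape (n*n, 2, 2)

# Draw one preference function from the zero-mean GP with this pair kernel.
K = pref_kernel(P, P)
w, V = np.linalg.eigh(K)
g = V @ (np.sqrt(np.clip(w, 0.0, None)) * rng.normal(size=len(w)))
G = g.reshape(n, n)                                  # G[i, j] = g(item_i, item_j)

print(np.allclose(G, -G.T))                          # skew-symmetric by construction
# Clusters of mutually comparable items could then be explored via a spectral
# decomposition of G (e.g. its leading eigenvectors / singular vectors).

Intransitive preferences correspond to cyclic structure in G (e.g. G[0, 1] > 0, G[1, 2] > 0 and G[2, 0] > 0), which a model restricted to g(x, x') = f(x) - f(x') cannot express; the skew-symmetric pair kernel above places no such restriction.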

Cite this Paper

BibTeX
@InProceedings{pmlr-v151-lun-chau22a,
  title     = {Learning Inconsistent Preferences with Gaussian Processes},
  author    = {Lun Chau, Siu and Gonzalez, Javier and Sejdinovic, Dino},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {2266--2281},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/lun-chau22a/lun-chau22a.pdf},
  url       = {https://proceedings.mlr.press/v151/lun-chau22a.html}
}
Endnote
%0 Conference Paper
%T Learning Inconsistent Preferences with Gaussian Processes
%A Siu Lun Chau
%A Javier Gonzalez
%A Dino Sejdinovic
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-lun-chau22a
%I PMLR
%P 2266--2281
%U https://proceedings.mlr.press/v151/lun-chau22a.html
%V 151
APA
Lun Chau, S., Gonzalez, J. & Sejdinovic, D. (2022). Learning Inconsistent Preferences with Gaussian Processes. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:2266-2281. Available from https://proceedings.mlr.press/v151/lun-chau22a.html.