Response Time Improves Gaussian Process Models for Perception and Preferences

Michael Shvartsman, Benjamin Letham, Eytan Bakshy, Stephen Keeley
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:3211-3226, 2024.

Abstract

Models for human choice prediction in preference learning and perception science often use binary response data, requiring many samples to accurately learn latent utilities or perceptual intensities. The response time (RT) to make each choice captures additional information about the decision process, but existing models incorporating RTs for choice prediction do so in a fully parametric way or over discrete inputs. At the same time, state-of-the-art Gaussian process (GP) models of perception and preferences operate on choices only, ignoring RTs. We propose two approaches for incorporating RTs into GP preference and perception models. The first is based on stacking GP models, and the second uses a novel differentiable approximation to the likelihood of the diffusion decision model (DDM), the de-facto standard model for choice RTs. Our RT-choice GPs enable better latent value estimation and held-out choice prediction relative to baselines, which we demonstrate on three real-world multivariate datasets covering both human psychophysics and preference learning.
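For orientation, the sketch below shows the standard DDM first-passage time likelihood that choice-RT models of this kind build on, using the well-known large-time series expansion (Navarro & Fuss, 2009) in plain NumPy. This is a generic reference implementation, not the paper's novel differentiable approximation or stacked-GP model; the function names, parameterization, and the idea of feeding a GP-derived latent value in as the drift rate are illustrative assumptions.

```python
import numpy as np

def ddm_fpt_density(t, v, a, w, n_terms=50):
    """First-passage time density of a Wiener diffusion at the lower boundary,
    via the truncated large-time series expansion (Navarro & Fuss, 2009).

    t : decision time(s), after subtracting non-decision time
    v : drift rate
    a : boundary separation
    w : relative starting point in (0, 1)
    """
    t = np.atleast_1d(np.asarray(t, dtype=float))
    k = np.arange(1, n_terms + 1)[:, None]  # series index, broadcast over t
    series = k * np.sin(k * np.pi * w) * np.exp(-k**2 * np.pi**2 * t / (2 * a**2))
    return (np.pi / a**2) * np.exp(-v * a * w - v**2 * t / 2) * series.sum(axis=0)

def choice_rt_likelihood(t, choice, v, a, w=0.5):
    """Joint choice-RT density: lower-boundary hits use (v, w); upper-boundary
    hits use the reflected process (-v, 1 - w)."""
    return np.where(
        choice == 0,
        ddm_fpt_density(t, v, a, w),
        ddm_fpt_density(t, -v, a, 1.0 - w),
    )

# Illustrative use: a GP posterior over latent utility or perceptual intensity
# could supply the per-trial drift rate v (hypothetical values shown here).
print(choice_rt_likelihood(t=0.8, choice=1, v=1.2, a=1.5))
```

In a GP-based model along the lines described in the abstract, a density like this (or a differentiable surrogate for it) would serve as the observation likelihood linking the latent function to observed choices and response times.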

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-shvartsman24a,
  title     = {Response Time Improves Gaussian Process Models for Perception and Preferences},
  author    = {Shvartsman, Michael and Letham, Benjamin and Bakshy, Eytan and Keeley, Stephen},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {3211--3226},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/shvartsman24a/shvartsman24a.pdf},
  url       = {https://proceedings.mlr.press/v244/shvartsman24a.html},
  abstract  = {Models for human choice prediction in preference learning and perception science often use binary response data, requiring many samples to accurately learn latent utilities or perceptual intensities. The response time (RT) to make each choice captures additional information about the decision process, but existing models incorporating RTs for choice prediction do so in a fully parametric way or over discrete inputs. At the same time, state-of-the-art Gaussian process (GP) models of perception and preferences operate on choices only, ignoring RTs. We propose two approaches for incorporating RTs into GP preference and perception models. The first is based on stacking GP models, and the second uses a novel differentiable approximation to the likelihood of the diffusion decision model (DDM), the de-facto standard model for choice RTs. Our RT-choice GPs enable better latent value estimation and held-out choice prediction relative to baselines, which we demonstrate on three real-world multivariate datasets covering both human psychophysics and preference learning.}
}
Endnote
%0 Conference Paper
%T Response Time Improves Gaussian Process Models for Perception and Preferences
%A Michael Shvartsman
%A Benjamin Letham
%A Eytan Bakshy
%A Stephen Keeley
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-shvartsman24a
%I PMLR
%P 3211--3226
%U https://proceedings.mlr.press/v244/shvartsman24a.html
%V 244
%X Models for human choice prediction in preference learning and perception science often use binary response data, requiring many samples to accurately learn latent utilities or perceptual intensities. The response time (RT) to make each choice captures additional information about the decision process, but existing models incorporating RTs for choice prediction do so in a fully parametric way or over discrete inputs. At the same time, state-of-the-art Gaussian process (GP) models of perception and preferences operate on choices only, ignoring RTs. We propose two approaches for incorporating RTs into GP preference and perception models. The first is based on stacking GP models, and the second uses a novel differentiable approximation to the likelihood of the diffusion decision model (DDM), the de-facto standard model for choice RTs. Our RT-choice GPs enable better latent value estimation and held-out choice prediction relative to baselines, which we demonstrate on three real-world multivariate datasets covering both human psychophysics and preference learning.
APA
Shvartsman, M., Letham, B., Bakshy, E., & Keeley, S. (2024). Response Time Improves Gaussian Process Models for Perception and Preferences. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:3211-3226. Available from https://proceedings.mlr.press/v244/shvartsman24a.html.