On the cognitive alignment between humans and machines

Marco Rothermel, Sayed Soroush Daftarian, Tahmineh A. Koosha, Mohammad-Ali Nikouei Mahani, Hamidreza Jamalabadi
Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, PMLR 285:194-203, 2024.

Abstract

In this paper, we explore the psychological relevance, similarity to brain representations, and subject-invariance of latent space representations in generative models. Using fMRI data from four subjects who viewed over 9,000 visual stimuli, we conducted three experiments to investigate this alignment. First, we assessed whether a linear mapping between the latent space of a generative model, in this case a very deep VAE (VDVAE), and fMRI brain responses could accurately capture cognitive properties, specifically emotional valence, of the visual stimuli presented to both humans and machines. Second, we examined whether perturbing psychologically relevant dimensions in either the generative model or human brain data would produce corresponding cognitive effects in both systems, across models and human subjects. Third, we investigated whether a nonlinear mapping, approximated via a Taylor expansion up to the fifth degree, would outperform a linear mapping in aligning cognitive properties. Our findings revealed three key insights: (1) the latent space of the generative model aligns with fMRI brain responses across all subjects tested (r ≈ 0.4); (2) perturbations in the psychologically relevant dimensions of both the fMRI data and the generative model produced highly consistent effects across the aligned systems (both the model and human subjects); and (3) a linear mapping, fit with Ridge regression, performed as well as or better than all the Taylor expansions we tested. Together, these results suggest a universal cognitive alignment between humans and between human-model systems. This universality holds significant potential for advancing our understanding of basic cognitive processes and offers promising new avenues for studying mental disorders.
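The first experiment's core operation, a Ridge-regression mapping from fMRI responses to latent dimensions, evaluated by the correlation between predicted and true latents, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's actual pipeline: the array shapes, noise level, and regularization strength are hypothetical stand-ins, and the closed-form ridge solver is a generic implementation rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: n stimuli, fMRI voxels, VDVAE-style latent dimensions
n, n_voxels, n_latent = 200, 50, 16

# Synthetic data with a planted linear relation plus noise, so a linear
# map should recover the latents well above chance
true_W = rng.normal(size=(n_voxels, n_latent))
fmri = rng.normal(size=(n, n_voxels))
latents = fmri @ true_W + 0.5 * rng.normal(size=(n, n_latent))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Simple train/test split over stimuli
X_tr, X_te = fmri[:150], fmri[150:]
Y_tr, Y_te = latents[:150], latents[150:]

W = ridge_fit(X_tr, Y_tr, alpha=10.0)
pred = X_te @ W

# Alignment score: Pearson r between predicted and true values,
# averaged over latent dimensions
r = np.mean([np.corrcoef(pred[:, j], Y_te[:, j])[0, 1]
             for j in range(n_latent)])
print(f"mean r = {r:.2f}")
```

On this synthetic data the recovered correlation is high by construction; the paper's reported r ≈ 0.4 on real fMRI reflects a far noisier mapping. The nonlinear comparison in the third experiment would replace the design matrix with polynomial expansions of the fMRI features before the same ridge fit.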

Cite this Paper


BibTeX
@InProceedings{pmlr-v285-rothermel24a,
  title = {On the cognitive alignment between humans and machines},
  author = {Rothermel, Marco and Daftarian, Sayed Soroush and Koosha, Tahmineh A. and Mahani, Mohammad-Ali Nikouei and Jamalabadi, Hamidreza},
  booktitle = {Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models},
  pages = {194--203},
  year = {2024},
  editor = {Fumero, Marco and Domine, Clementine and Lähner, Zorah and Crisostomi, Donato and Moschella, Luca and Stachenfeld, Kimberly},
  volume = {285},
  series = {Proceedings of Machine Learning Research},
  month = {14 Dec},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v285/main/assets/rothermel24a/rothermel24a.pdf},
  url = {https://proceedings.mlr.press/v285/rothermel24a.html},
  abstract = {In this paper, we explore the psychological relevance, similarity to brain representations, and subject-invariance of latent space representations in generative models. Using fMRI data from four subjects who viewed over 9,000 visual stimuli, we conducted three experiments to investigate this alignment. First, we assessed whether a linear mapping between the latent space of a generative model, in this case a very deep VAE (VDVAE), and fMRI brain responses could accurately capture cognitive properties, specifically emotional valence, of the visual stimuli presented to both humans and machines. Second, we examined whether perturbing psychologically relevant dimensions in either the generative model or human brain data would produce corresponding cognitive effects in both systems, across models and human subjects. Third, we investigated whether a nonlinear mapping, approximated via a Taylor expansion up to the fifth degree, would outperform a linear mapping in aligning cognitive properties. Our findings revealed three key insights: (1) the latent space of the generative model aligns with fMRI brain responses across all subjects tested (r ≈ 0.4); (2) perturbations in the psychologically relevant dimensions of both the fMRI data and the generative model produced highly consistent effects across the aligned systems (both the model and human subjects); and (3) a linear mapping, fit with Ridge regression, performed as well as or better than all the Taylor expansions we tested. Together, these results suggest a universal cognitive alignment between humans and between human-model systems. This universality holds significant potential for advancing our understanding of basic cognitive processes and offers promising new avenues for studying mental disorders.}
}
Endnote
%0 Conference Paper %T On the cognitive alignment between humans and machines %A Marco Rothermel %A Sayed Soroush Daftarian %A Tahmineh A. Koosha %A Mohammad-Ali Nikouei Mahani %A Hamidreza Jamalabadi %B Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models %C Proceedings of Machine Learning Research %D 2024 %E Marco Fumero %E Clementine Domine %E Zorah Lähner %E Donato Crisostomi %E Luca Moschella %E Kimberly Stachenfeld %F pmlr-v285-rothermel24a %I PMLR %P 194--203 %U https://proceedings.mlr.press/v285/rothermel24a.html %V 285 %X In this paper, we explore the psychological relevance, similarity to brain representations, and subject-invariance of latent space representations in generative models. Using fMRI data from four subjects who viewed over 9,000 visual stimuli, we conducted three experiments to investigate this alignment. First, we assessed whether a linear mapping between the latent space of a generative model, in this case a very deep VAE (VDVAE), and fMRI brain responses could accurately capture cognitive properties, specifically emotional valence, of the visual stimuli presented to both humans and machines. Second, we examined whether perturbing psychologically relevant dimensions in either the generative model or human brain data would produce corresponding cognitive effects in both systems, across models and human subjects. Third, we investigated whether a nonlinear mapping, approximated via a Taylor expansion up to the fifth degree, would outperform a linear mapping in aligning cognitive properties. Our findings revealed three key insights: (1) the latent space of the generative model aligns with fMRI brain responses across all subjects tested (r ≈ 0.4); (2) perturbations in the psychologically relevant dimensions of both the fMRI data and the generative model produced highly consistent effects across the aligned systems (both the model and human subjects); and (3) a linear mapping, fit with Ridge regression, performed as well as or better than all the Taylor expansions we tested. Together, these results suggest a universal cognitive alignment between humans and between human-model systems. This universality holds significant potential for advancing our understanding of basic cognitive processes and offers promising new avenues for studying mental disorders.
APA
Rothermel, M., Daftarian, S.S., Koosha, T.A., Mahani, M.N. & Jamalabadi, H. (2024). On the cognitive alignment between humans and machines. Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 285:194-203. Available from https://proceedings.mlr.press/v285/rothermel24a.html.