A General Method for Testing Bayesian Models using Neural Data

Gabor Lengyel, Sabyasachi Shivkumar, Ralf M Haefner
Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models, PMLR 243:144-157, 2024.

Abstract

Bayesian models have been successful in explaining human and animal behavior, but the extent to which they can also explain neural activity is still an open question. A major obstacle to answering this question is that current methods for generating neural predictions require detailed and specific assumptions about the encoding of posterior beliefs in neural responses, with no consensus or decisive data about the nature of this encoding. Here, we present a new method that overcomes these challenges for a wide class of probabilistic encodings – including the two major classes of neural sampling and distributed distributional codes – and we prove conditions for its validity. Our method tests whether the relationships between the model posteriors for different stimuli match the relationships between the corresponding neural responses – akin to representational similarity analysis (RSA), a widely used method for nonprobabilistic models. Finally, we present a new model comparison diagnostic for our method, based not on the agreement of the model with the data directly, but on the alignment of the model and data when injecting noise into our neural prediction generation method. We illustrate our method using simulated V1 data and compare two Bayesian models that are practically indistinguishable using behavior alone. Our results show a powerful new way to rigorously test Bayesian models on neural data.
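The RSA-style comparison the abstract describes can be illustrated with a minimal sketch: build one representational dissimilarity matrix (RDM) from summaries of the model posteriors across stimuli, another from the neural responses to the same stimuli, and compare the two with a rank correlation. Everything below (stimulus counts, the use of posterior means as summaries, the simulated linear encoding) is a hypothetical illustration of the general RSA idea, not the paper's actual prediction-generation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 stimuli, model posteriors summarized by their
# mean vectors, and simulated neural population responses that encode
# those posteriors through an unknown linear map plus noise.
n_stim, n_dim, n_neurons = 8, 4, 50
posterior_means = rng.normal(size=(n_stim, n_dim))        # model side
W = rng.normal(size=(n_dim, n_neurons))                   # unknown encoding
neural_responses = posterior_means @ W + 0.1 * rng.normal(size=(n_stim, n_neurons))

def dissimilarity_matrix(X):
    """Pairwise Euclidean distances between the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rdm_model = dissimilarity_matrix(posterior_means)
rdm_neural = dissimilarity_matrix(neural_responses)

# Compare the upper triangles of the two RDMs with a rank correlation,
# so the comparison does not depend on the scale of the unknown encoding.
iu = np.triu_indices(n_stim, k=1)
def ranks(v):
    return v.argsort().argsort().astype(float)
rho = np.corrcoef(ranks(rdm_model[iu]), ranks(rdm_neural[iu]))[0, 1]
print(f"rank correlation between model and neural RDMs: {rho:.2f}")
```

In this toy example the neural responses are, by construction, a noisy linear image of the posterior summaries, so the two RDMs agree closely; for a model that mismatches the data, the correlation would drop.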

Cite this Paper


BibTeX
@InProceedings{pmlr-v243-lengyel24a,
  title = {A General Method for Testing Bayesian Models using Neural Data},
  author = {Lengyel, Gabor and Shivkumar, Sabyasachi and Haefner, Ralf M},
  booktitle = {Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models},
  pages = {144--157},
  year = {2024},
  editor = {Fumero, Marco and Rodolà, Emanuele and Domine, Clementine and Locatello, Francesco and Dziugaite, Karolina and Caron, Mathilde},
  volume = {243},
  series = {Proceedings of Machine Learning Research},
  month = {15 Dec},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v243/lengyel24a/lengyel24a.pdf},
  url = {https://proceedings.mlr.press/v243/lengyel24a.html},
  abstract = {Bayesian models have been successful in explaining human and animal behavior, but the extent to which they can also explain neural activity is still an open question. A major obstacle to answering this question is that current methods for generating neural predictions require detailed and specific assumptions about the encoding of posterior beliefs in neural responses, with no consensus or decisive data about the nature of this encoding. Here, we present a new method that overcomes these challenges for a wide class of probabilistic encodings – including the two major classes of neural sampling and distributed distributional codes – and we prove conditions for its validity. Our method tests whether the relationships between the model posteriors for different stimuli match the relationships between the corresponding neural responses – akin to representational similarity analysis (RSA), a widely used method for nonprobabilistic models. Finally, we present a new model comparison diagnostic for our method, based not on the agreement of the model with the data directly, but on the alignment of the model and data when injecting noise into our neural prediction generation method. We illustrate our method using simulated V1 data and compare two Bayesian models that are practically indistinguishable using behavior alone. Our results show a powerful new way to rigorously test Bayesian models on neural data.}
}
Endnote
%0 Conference Paper
%T A General Method for Testing Bayesian Models using Neural Data
%A Gabor Lengyel
%A Sabyasachi Shivkumar
%A Ralf M Haefner
%B Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models
%C Proceedings of Machine Learning Research
%D 2024
%E Marco Fumero
%E Emanuele Rodolà
%E Clementine Domine
%E Francesco Locatello
%E Karolina Dziugaite
%E Mathilde Caron
%F pmlr-v243-lengyel24a
%I PMLR
%P 144--157
%U https://proceedings.mlr.press/v243/lengyel24a.html
%V 243
%X Bayesian models have been successful in explaining human and animal behavior, but the extent to which they can also explain neural activity is still an open question. A major obstacle to answering this question is that current methods for generating neural predictions require detailed and specific assumptions about the encoding of posterior beliefs in neural responses, with no consensus or decisive data about the nature of this encoding. Here, we present a new method that overcomes these challenges for a wide class of probabilistic encodings – including the two major classes of neural sampling and distributed distributional codes – and we prove conditions for its validity. Our method tests whether the relationships between the model posteriors for different stimuli match the relationships between the corresponding neural responses – akin to representational similarity analysis (RSA), a widely used method for nonprobabilistic models. Finally, we present a new model comparison diagnostic for our method, based not on the agreement of the model with the data directly, but on the alignment of the model and data when injecting noise into our neural prediction generation method. We illustrate our method using simulated V1 data and compare two Bayesian models that are practically indistinguishable using behavior alone. Our results show a powerful new way to rigorously test Bayesian models on neural data.
APA
Lengyel, G., Shivkumar, S. & Haefner, R.M. (2024). A General Method for Testing Bayesian Models using Neural Data. Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 243:144-157. Available from https://proceedings.mlr.press/v243/lengyel24a.html.