Comparing Representations in Static and Dynamic Vision Models to the Human Brain

Hamed Karimi, Stefano Anzellotti
Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, PMLR 285:282-295, 2024.

Abstract

We compared neural responses to naturalistic videos and representations in deep network models trained with static and dynamic information. Models trained with dynamic information showed greater correspondence with neural representations in all brain regions, including those previously associated with the processing of static information. Among the models trained with dynamic information, those based on optic flow accounted for unique variance in neural responses that were not captured by Masked Autoencoders. This effect was strongest in ventral and dorsal brain regions, indicating that despite the Masked Autoencoders’ effectiveness at a variety of tasks, their representations diverge from representations in the human brain in the early stages of visual processing.

Cite this Paper
BibTeX
@InProceedings{pmlr-v285-karimi24a,
  title     = {Comparing Representations in Static and Dynamic Vision Models to the Human Brain},
  author    = {Karimi, Hamed and Anzellotti, Stefano},
  booktitle = {Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models},
  pages     = {282--295},
  year      = {2024},
  editor    = {Fumero, Marco and Domine, Clementine and Lähner, Zorah and Crisostomi, Donato and Moschella, Luca and Stachenfeld, Kimberly},
  volume    = {285},
  series    = {Proceedings of Machine Learning Research},
  month     = {14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v285/main/assets/karimi24a/karimi24a.pdf},
  url       = {https://proceedings.mlr.press/v285/karimi24a.html},
  abstract  = {We compared neural responses to naturalistic videos and representations in deep network models trained with static and dynamic information. Models trained with dynamic information showed greater correspondence with neural representations in all brain regions, including those previously associated with the processing of static information. Among the models trained with dynamic information, those based on optic flow accounted for unique variance in neural responses that were not captured by Masked Autoencoders. This effect was strongest in ventral and dorsal brain regions, indicating that despite the Masked Autoencoders’ effectiveness at a variety of tasks, their representations diverge from representations in the human brain in the early stages of visual processing.}
}
Endnote
%0 Conference Paper
%T Comparing Representations in Static and Dynamic Vision Models to the Human Brain
%A Hamed Karimi
%A Stefano Anzellotti
%B Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models
%C Proceedings of Machine Learning Research
%D 2024
%E Marco Fumero
%E Clementine Domine
%E Zorah Lähner
%E Donato Crisostomi
%E Luca Moschella
%E Kimberly Stachenfeld
%F pmlr-v285-karimi24a
%I PMLR
%P 282--295
%U https://proceedings.mlr.press/v285/karimi24a.html
%V 285
%X We compared neural responses to naturalistic videos and representations in deep network models trained with static and dynamic information. Models trained with dynamic information showed greater correspondence with neural representations in all brain regions, including those previously associated with the processing of static information. Among the models trained with dynamic information, those based on optic flow accounted for unique variance in neural responses that were not captured by Masked Autoencoders. This effect was strongest in ventral and dorsal brain regions, indicating that despite the Masked Autoencoders’ effectiveness at a variety of tasks, their representations diverge from representations in the human brain in the early stages of visual processing.
APA
Karimi, H. & Anzellotti, S. (2024). Comparing Representations in Static and Dynamic Vision Models to the Human Brain. Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 285:282-295. Available from https://proceedings.mlr.press/v285/karimi24a.html.