What Representational Similarity Measures Imply about Decodable Information

Sarah E Harvey, David Lipshutz, Alex H Williams
Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, PMLR 285:140-151, 2024.

Abstract

Neural responses encode information that is useful for a variety of downstream tasks. A common approach to understand these systems is to build regression models or decoders that reconstruct features of the stimulus from neural responses. Here, we investigate how to leverage this perspective to quantify the similarity of different neural systems. This is distinct from typical motivations behind neural network similarity measures like centered kernel alignment (CKA), canonical correlation analysis (CCA), and Procrustes shape distance, which highlight geometric intuition and invariances to orthogonal or affine transformations. We show that CKA, CCA, and other measures can be equivalently motivated from similarity in decoding patterns. Specifically, these measures quantify the average alignment between optimal linear readouts across a distribution of decoding tasks. We also show that the Procrustes shape distance upper bounds the distance between optimal linear readouts and that the converse holds for representations with low participation ratio. Overall, our work demonstrates a tight link between the geometry of neural representations and the ability to linearly decode information. This perspective suggests new ways of measuring similarity between neural systems and also provides novel, unifying interpretations of existing measures.
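Of the measures discussed, linear centered kernel alignment (CKA) is the simplest to compute directly from two response matrices. As a minimal sketch (not the authors' implementation; shapes and names are illustrative), linear CKA between column-centered matrices X and Y is ‖XᵀY‖²_F / (‖XᵀX‖_F ‖YᵀY‖_F), which is invariant to orthogonal transformations of either representation:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two response matrices.

    X, Y: arrays of shape (n_stimuli, n_neurons_x) and (n_stimuli, n_neurons_y),
    rows are responses to the same stimuli. Returns a similarity in [0, 1].
    """
    X = X - X.mean(axis=0)  # center each neuron's responses
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))  # random orthogonal matrix
# Rotating the representation leaves linear CKA unchanged (value 1.0 vs. itself).
print(linear_cka(X, X @ Q))
```

This invariance to orthogonal transformations is exactly the geometric property the abstract contrasts with the decoding view: the paper shows the same quantity can be read as average alignment of optimal linear readouts across a distribution of decoding tasks.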

Cite this Paper


BibTeX
@InProceedings{pmlr-v285-harvey24a,
  title     = {What Representational Similarity Measures Imply about Decodable Information},
  author    = {Harvey, Sarah E and Lipshutz, David and Williams, Alex H},
  booktitle = {Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models},
  pages     = {140--151},
  year      = {2024},
  editor    = {Fumero, Marco and Domine, Clementine and Lähner, Zorah and Crisostomi, Donato and Moschella, Luca and Stachenfeld, Kimberly},
  volume    = {285},
  series    = {Proceedings of Machine Learning Research},
  month     = {14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v285/main/assets/harvey24a/harvey24a.pdf},
  url       = {https://proceedings.mlr.press/v285/harvey24a.html},
  abstract  = {Neural responses encode information that is useful for a variety of downstream tasks. A common approach to understand these systems is to build regression models or decoders that reconstruct features of the stimulus from neural responses. Here, we investigate how to leverage this perspective to quantify the similarity of different neural systems. This is distinct from typical motivations behind neural network similarity measures like centered kernel alignment (CKA), canonical correlation analysis (CCA), and Procrustes shape distance, which highlight geometric intuition and invariances to orthogonal or affine transformations. We show that CKA, CCA, and other measures can be equivalently motivated from similarity in decoding patterns. Specifically, these measures quantify the average alignment between optimal linear readouts across a distribution of decoding tasks. We also show that the Procrustes shape distance upper bounds the distance between optimal linear readouts and that the converse holds for representations with low participation ratio. Overall, our work demonstrates a tight link between the geometry of neural representations and the ability to linearly decode information. This perspective suggests new ways of measuring similarity between neural systems and also provides novel, unifying interpretations of existing measures.}
}
Endnote
%0 Conference Paper
%T What Representational Similarity Measures Imply about Decodable Information
%A Sarah E Harvey
%A David Lipshutz
%A Alex H Williams
%B Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models
%C Proceedings of Machine Learning Research
%D 2024
%E Marco Fumero
%E Clementine Domine
%E Zorah Lähner
%E Donato Crisostomi
%E Luca Moschella
%E Kimberly Stachenfeld
%F pmlr-v285-harvey24a
%I PMLR
%P 140--151
%U https://proceedings.mlr.press/v285/harvey24a.html
%V 285
%X Neural responses encode information that is useful for a variety of downstream tasks. A common approach to understand these systems is to build regression models or decoders that reconstruct features of the stimulus from neural responses. Here, we investigate how to leverage this perspective to quantify the similarity of different neural systems. This is distinct from typical motivations behind neural network similarity measures like centered kernel alignment (CKA), canonical correlation analysis (CCA), and Procrustes shape distance, which highlight geometric intuition and invariances to orthogonal or affine transformations. We show that CKA, CCA, and other measures can be equivalently motivated from similarity in decoding patterns. Specifically, these measures quantify the average alignment between optimal linear readouts across a distribution of decoding tasks. We also show that the Procrustes shape distance upper bounds the distance between optimal linear readouts and that the converse holds for representations with low participation ratio. Overall, our work demonstrates a tight link between the geometry of neural representations and the ability to linearly decode information. This perspective suggests new ways of measuring similarity between neural systems and also provides novel, unifying interpretations of existing measures.
APA
Harvey, S.E., Lipshutz, D. & Williams, A.H. (2024). What Representational Similarity Measures Imply about Decodable Information. Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 285:140-151. Available from https://proceedings.mlr.press/v285/harvey24a.html.