Duality of Bures and Shape Distances with Implications for Comparing Neural Representations

Sarah E. Harvey, Brett W. Larsen, Alex H. Williams
Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models, PMLR 243:11-26, 2024.

Abstract

A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape. Most (dis)similarity measures fall into one of two categories. First, measures such as linear regression, canonical correlation analysis (CCA), and shape distances all learn explicit mappings between neural units to quantify similarity while accounting for expected invariances. Second, measures such as representational similarity analysis (RSA), centered kernel alignment (CKA), and normalized Bures similarity (NBS) all quantify similarity in summary statistics that are already invariant to such symmetries (e.g. by comparing stimulus-by-stimulus kernel matrices). Here, we take steps towards unifying these two broad categories of methods by observing that the cosine of the Riemannian shape distance (from category 1) is equal to NBS (from category 2). We explore how this connection leads to new interpretations of shape distances and NBS, and contrast these measures with CKA, a popular similarity measure in the deep learning literature.
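The duality stated above can be checked numerically. The sketch below (a minimal illustration, not code from the paper; matrix sizes and the use of NumPy are my own choices) computes NBS from stimulus-by-stimulus kernel matrices alone, and then recovers the same value from an explicit optimal orthogonal alignment, since the trace of the Bures cross term equals the nuclear norm of the cross-product matrix. The arccosine of this quantity is the Riemannian shape distance.

```python
import numpy as np

def psd_sqrt(K):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(K)
    w = np.clip(w, 0.0, None)  # clamp tiny negative eigenvalues from round-off
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(0)
m, n = 20, 5                    # m stimuli, n neurons (sizes are arbitrary)
X = rng.standard_normal((m, n)); X -= X.mean(axis=0)  # centered responses
Y = rng.standard_normal((m, n)); Y -= Y.mean(axis=0)

KX, KY = X @ X.T, Y @ Y.T       # stimulus-by-stimulus kernel matrices

# Normalized Bures similarity, computed from the kernels alone (category 2)
rX = psd_sqrt(KX)
nbs = np.trace(psd_sqrt(rX @ KY @ rX)) / np.sqrt(np.trace(KX) * np.trace(KY))

# The same quantity via an explicit mapping between units (category 1):
# the nuclear norm of Y^T X is max_Q tr(Q Y^T X) over orthogonal Q,
# i.e. the score of the best orthogonal alignment of the two responses.
nbs_aligned = np.linalg.norm(Y.T @ X, ord='nuc') / (
    np.linalg.norm(X, 'fro') * np.linalg.norm(Y, 'fro'))

theta = np.arccos(nbs)          # Riemannian shape distance (radians)
print(nbs, nbs_aligned, theta)
```

The two routes agree to numerical precision, which is exactly the kernel-side / alignment-side duality the paper observes.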

Cite this Paper


BibTeX
@InProceedings{pmlr-v243-harvey24a,
  title     = {Duality of Bures and Shape Distances with Implications for Comparing Neural Representations},
  author    = {Harvey, Sarah E. and Larsen, Brett W. and Williams, Alex H.},
  booktitle = {Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models},
  pages     = {11--26},
  year      = {2024},
  editor    = {Fumero, Marco and Rodol\`a, Emanuele and Domine, Clementine and Locatello, Francesco and Dziugaite, Karolina and Caron, Mathilde},
  volume    = {243},
  series    = {Proceedings of Machine Learning Research},
  month     = {15 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v243/harvey24a/harvey24a.pdf},
  url       = {https://proceedings.mlr.press/v243/harvey24a.html},
  abstract  = {A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape. Most (dis)similarity measures fall into one of two categories. First, measures such as linear regression, canonical correlation analysis (CCA), and shape distances all learn explicit mappings between neural units to quantify similarity while accounting for expected invariances. Second, measures such as representational similarity analysis (RSA), centered kernel alignment (CKA), and normalized Bures similarity (NBS) all quantify similarity in summary statistics that are already invariant to such symmetries (e.g. by comparing stimulus-by-stimulus kernel matrices). Here, we take steps towards unifying these two broad categories of methods by observing that the cosine of the Riemannian shape distance (from category 1) is equal to NBS (from category 2). We explore how this connection leads to new interpretations of shape distances and NBS, and contrast these measures with CKA, a popular similarity measure in the deep learning literature.}
}
Endnote
%0 Conference Paper
%T Duality of Bures and Shape Distances with Implications for Comparing Neural Representations
%A Sarah E. Harvey
%A Brett W. Larsen
%A Alex H. Williams
%B Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models
%C Proceedings of Machine Learning Research
%D 2024
%E Marco Fumero
%E Emanuele Rodolà
%E Clementine Domine
%E Francesco Locatello
%E Karolina Dziugaite
%E Mathilde Caron
%F pmlr-v243-harvey24a
%I PMLR
%P 11--26
%U https://proceedings.mlr.press/v243/harvey24a.html
%V 243
%X A multitude of (dis)similarity measures between neural network representations have been proposed, resulting in a fragmented research landscape. Most (dis)similarity measures fall into one of two categories. First, measures such as linear regression, canonical correlation analysis (CCA), and shape distances all learn explicit mappings between neural units to quantify similarity while accounting for expected invariances. Second, measures such as representational similarity analysis (RSA), centered kernel alignment (CKA), and normalized Bures similarity (NBS) all quantify similarity in summary statistics that are already invariant to such symmetries (e.g. by comparing stimulus-by-stimulus kernel matrices). Here, we take steps towards unifying these two broad categories of methods by observing that the cosine of the Riemannian shape distance (from category 1) is equal to NBS (from category 2). We explore how this connection leads to new interpretations of shape distances and NBS, and contrast these measures with CKA, a popular similarity measure in the deep learning literature.
APA
Harvey, S.E., Larsen, B.W. & Williams, A.H. (2024). Duality of Bures and Shape Distances with Implications for Comparing Neural Representations. Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 243:11-26. Available from https://proceedings.mlr.press/v243/harvey24a.html.