Do neural networks trained with topological features learn different internal representations?

Sarah McGuire, Shane Jackson, Tegan Emerson, Henry Kvinge
Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 197:122-136, 2023.

Abstract

There is a growing body of work that leverages features extracted via topological data analysis to train machine learning models. While this field, sometimes known as topological machine learning (TML), has seen some notable successes, an understanding of how the process of learning from topological features differs from the process of learning from raw data is still limited. In this work, we begin to address one component of this larger issue by asking whether a model trained with topological features learns internal representations of data that are fundamentally different than those learned by a model trained with the original raw data. To quantify “different”, we exploit two popular metrics that can be used to measure the similarity of the hidden representations of data within neural networks, neural stitching and centered kernel alignment. From these we draw a range of conclusions about how training with topological features does and does not change the representations that a model learns. Perhaps unsurprisingly, we find that structurally, the hidden representations of models trained and evaluated on topological features differ substantially compared to those trained and evaluated on the corresponding raw data. On the other hand, our experiments show that in some cases, these representations can be reconciled (at least to the degree required to solve the corresponding task) using a simple affine transformation. We conjecture that this means that neural networks trained on raw data may extract some limited topological features in the process of making predictions.
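For readers unfamiliar with the second of the two similarity measures named above, the following is a minimal sketch of linear centered kernel alignment (CKA) between the hidden activations of two models; it is not the authors' implementation, and the variable names, shapes, and random data are illustrative assumptions only.

import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices.

    X : (n_examples, d1) hidden activations from one model/layer.
    Y : (n_examples, d2) hidden activations from another model/layer,
        computed on the same examples.
    Returns a similarity in [0, 1]; values near 1 indicate the two
    representations agree up to an orthogonal transform and scaling.
    """
    # Center each feature (column) across the examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # ||Y^T X||_F^2 normalized by the Frobenius norms of the Gram terms.
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator

# Hypothetical example: compare activations from a model trained on raw
# data with activations from a model trained on topological (TDA) features,
# both evaluated on the same batch of inputs.
rng = np.random.default_rng(0)
acts_raw_model = rng.normal(size=(512, 128))  # placeholder activations
acts_tda_model = rng.normal(size=(512, 64))   # placeholder activations
print(linear_cka(acts_raw_model, acts_tda_model))

# Invariance check: an orthogonal transform of the same representation
# should give CKA close to 1.
Q, _ = np.linalg.qr(rng.normal(size=(128, 128)))
print(linear_cka(acts_raw_model, acts_raw_model @ Q))  # ~1.0

Neural stitching, the other measure mentioned in the abstract, is assessed differently: both networks are frozen and a small affine layer is trained to map one model's hidden activations into the other model's downstream layers; if the stitched model recovers task performance, the two representations are considered reconcilable up to that affine map.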

Cite this Paper


BibTeX
@InProceedings{pmlr-v197-mcguire23a,
  title     = {Do neural networks trained with topological features learn different internal representations?},
  author    = {McGuire, Sarah and Jackson, Shane and Emerson, Tegan and Kvinge, Henry},
  booktitle = {Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {122--136},
  year      = {2023},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Di Bernardo, Arianna and Miolane, Nina},
  volume    = {197},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v197/mcguire23a/mcguire23a.pdf},
  url       = {https://proceedings.mlr.press/v197/mcguire23a.html},
  abstract  = {There is a growing body of work that leverages features extracted via topological data analysis to train machine learning models. While this field, sometimes known as topological machine learning (TML), has seen some notable successes, an understanding of how the process of learning from topological features differs from the process of learning from raw data is still limited. In this work, we begin to address one component of this larger issue by asking whether a model trained with topological features learns internal representations of data that are fundamentally different than those learned by a model trained with the original raw data. To quantify “different”, we exploit two popular metrics that can be used to measure the similarity of the hidden representations of data within neural networks, neural stitching and centered kernel alignment. From these we draw a range of conclusions about how training with topological features does and does not change the representations that a model learns. Perhaps unsurprisingly, we find that structurally, the hidden representations of models trained and evaluated on topological features differ substantially compared to those trained and evaluated on the corresponding raw data. On the other hand, our experiments show that in some cases, these representations can be reconciled (at least to the degree required to solve the corresponding task) using a simple affine transformation. We conjecture that this means that neural networks trained on raw data may extract some limited topological features in the process of making predictions.}
}
Endnote
%0 Conference Paper
%T Do neural networks trained with topological features learn different internal representations?
%A Sarah McGuire
%A Shane Jackson
%A Tegan Emerson
%A Henry Kvinge
%B Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations
%C Proceedings of Machine Learning Research
%D 2023
%E Sophia Sanborn
%E Christian Shewmake
%E Simone Azeglio
%E Arianna Di Bernardo
%E Nina Miolane
%F pmlr-v197-mcguire23a
%I PMLR
%P 122--136
%U https://proceedings.mlr.press/v197/mcguire23a.html
%V 197
%X There is a growing body of work that leverages features extracted via topological data analysis to train machine learning models. While this field, sometimes known as topological machine learning (TML), has seen some notable successes, an understanding of how the process of learning from topological features differs from the process of learning from raw data is still limited. In this work, we begin to address one component of this larger issue by asking whether a model trained with topological features learns internal representations of data that are fundamentally different than those learned by a model trained with the original raw data. To quantify “different”, we exploit two popular metrics that can be used to measure the similarity of the hidden representations of data within neural networks, neural stitching and centered kernel alignment. From these we draw a range of conclusions about how training with topological features does and does not change the representations that a model learns. Perhaps unsurprisingly, we find that structurally, the hidden representations of models trained and evaluated on topological features differ substantially compared to those trained and evaluated on the corresponding raw data. On the other hand, our experiments show that in some cases, these representations can be reconciled (at least to the degree required to solve the corresponding task) using a simple affine transformation. We conjecture that this means that neural networks trained on raw data may extract some limited topological features in the process of making predictions.
APA
McGuire, S., Jackson, S., Emerson, T. & Kvinge, H. (2023). Do neural networks trained with topological features learn different internal representations? Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 197:122-136. Available from https://proceedings.mlr.press/v197/mcguire23a.html.