Convolutional Persistence as a Remedy to Neural Model Analysis

Ekaterina Khramtsova, Guido Zuccon, Xi Wang, Mahsa Baktashmotlagh
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:10839-10855, 2023.

Abstract

While deep neural networks are proven to be effective learning systems, their analysis is complex due to the high dimensionality of their weight space. Persistent topological properties can be used as an additional descriptor, providing insights on how the network weights evolve during training. In this paper, we focus on convolutional neural networks, and define the topology of the space populated by convolutional filters (i.e., kernels). We perform an extensive analysis of topological properties of the convolutional filters. Specifically, we define a metric based on persistent homology, namely, Convolutional Topology Representation, to determine an important factor in neural network training: the generalizability of the model to the test set. We further analyse how various training methods affect the topology of convolutional layers.
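The idea of treating a convolutional layer's kernels as a point cloud and summarizing it with persistent homology can be sketched as follows. This is a minimal illustration, not the paper's Convolutional Topology Representation: it computes only 0-dimensional persistence (connected components), using the fact that the H0 death times of a Vietoris-Rips filtration are exactly the edge weights of a minimum spanning tree on the pairwise-distance graph. The function name `h0_persistence` and the random example kernels are assumptions for illustration.

```python
# Minimal sketch: 0-dimensional persistent homology of a conv layer's
# filters, treating each flattened kernel as a point in Euclidean space.
# H0 bars (all born at 0) die at the single-linkage merge heights, which
# equal the MST edge weights of the pairwise-distance graph.
import numpy as np

def h0_persistence(filters):
    """Return sorted H0 death times for a point cloud of conv kernels.

    filters: array of shape (n_filters, k, k) -- one layer's kernels.
    """
    pts = filters.reshape(len(filters), -1)           # flatten each kernel
    n = len(pts)
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

    # Prim's algorithm: the MST edge weights are the H0 death times.
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()                             # cheapest link to tree
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf                        # ignore tree vertices
        j = int(np.argmin(best))                      # closest outside vertex
        deaths.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, dist[j])              # relax via new vertex
    return np.sort(np.array(deaths))

rng = np.random.default_rng(0)
kernels = rng.standard_normal((16, 3, 3))             # 16 random 3x3 filters
bars = h0_persistence(kernels)
print(len(bars))                                      # 15 finite H0 bars
```

Higher-dimensional features (loops, voids) would require a full Rips complex, as provided by libraries such as GUDHI or Ripser; the sketch above only shows the simplest case of how filter geometry yields a persistence summary.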

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-khramtsova23a,
  title     = {Convolutional Persistence as a Remedy to Neural Model Analysis},
  author    = {Khramtsova, Ekaterina and Zuccon, Guido and Wang, Xi and Baktashmotlagh, Mahsa},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {10839--10855},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/khramtsova23a/khramtsova23a.pdf},
  url       = {https://proceedings.mlr.press/v206/khramtsova23a.html},
  abstract  = {While deep neural networks are proven to be effective learning systems, their analysis is complex due to the high-dimensionality of their weight space. Persistent topological properties can be used as an additional descriptor, providing insights on how the network weights evolve during training. In this paper, we focus on convolutional neural networks, and define the topology of the space, populated by convolutional filters (i.e., kernels). We perform an extensive analysis of topological properties of the convolutional filters. Specifically, we define a metric based on persistent homology, namely, Convolutional Topology Representation, to determine an important factor in neural networks training: the generalizability of the model to the test set. We further analyse how various training methods affect the topology of convolutional layers.}
}
Endnote
%0 Conference Paper
%T Convolutional Persistence as a Remedy to Neural Model Analysis
%A Ekaterina Khramtsova
%A Guido Zuccon
%A Xi Wang
%A Mahsa Baktashmotlagh
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-khramtsova23a
%I PMLR
%P 10839--10855
%U https://proceedings.mlr.press/v206/khramtsova23a.html
%V 206
%X While deep neural networks are proven to be effective learning systems, their analysis is complex due to the high-dimensionality of their weight space. Persistent topological properties can be used as an additional descriptor, providing insights on how the network weights evolve during training. In this paper, we focus on convolutional neural networks, and define the topology of the space, populated by convolutional filters (i.e., kernels). We perform an extensive analysis of topological properties of the convolutional filters. Specifically, we define a metric based on persistent homology, namely, Convolutional Topology Representation, to determine an important factor in neural networks training: the generalizability of the model to the test set. We further analyse how various training methods affect the topology of convolutional layers.
APA
Khramtsova, E., Zuccon, G., Wang, X. &amp; Baktashmotlagh, M. (2023). Convolutional Persistence as a Remedy to Neural Model Analysis. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:10839-10855. Available from https://proceedings.mlr.press/v206/khramtsova23a.html.