Convolutional Persistence as a Remedy to Neural Model Analysis
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:10839-10855, 2023.
While deep neural networks have proven to be effective learning systems, analyzing them is difficult due to the high dimensionality of their weight space. Persistent topological properties can serve as an additional descriptor, providing insight into how the network weights evolve during training. In this paper, we focus on convolutional neural networks and study the topology of the space populated by convolutional filters (i.e., kernels). We perform an extensive analysis of the topological properties of these filters. Specifically, we define a descriptor based on persistent homology, the Convolutional Topology Representation, and use it to assess an important property of neural network training: the generalizability of the model to the test set. We further analyse how various training methods affect the topology of the convolutional layers.
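To make the underlying idea concrete, the sketch below computes 0-dimensional persistent homology (connected-component persistence under a Vietoris-Rips filtration) for a point cloud of flattened convolutional filters. This is a minimal, self-contained illustration of applying persistence to kernel weights, not the paper's actual Convolutional Topology Representation; the filter shapes and the union-find construction are our own assumptions for the example.

```python
import numpy as np

def h0_persistence(points):
    """0-dimensional persistence of a point cloud under the Vietoris-Rips
    filtration: every point is born at scale 0; a connected component dies
    at the edge length that merges it into another (Kruskal + union-find)."""
    n = len(points)
    # All pairwise Euclidean distances, sorted ascending (the filtration).
    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # one component dies when the clusters merge
    # n-1 finite bars plus one infinite bar for the surviving component.
    return [(0.0, d) for d in deaths] + [(0.0, np.inf)]

# Example: filters of a hypothetical 3x3 conv layer, flattened to 9-dim points.
rng = np.random.default_rng(0)
filters = rng.normal(size=(16, 3, 3)).reshape(16, -1)
diagram = h0_persistence(filters)
```

The resulting persistence diagram (one bar per filter) summarizes at which distance scales clusters of similar kernels merge; comparing such diagrams across training epochs is one simple way to track how the filter space evolves.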