Does Geometric Structure in Convolutional Filter Space Provide Filter Redundancy Information?

Anshul Thakur, Vinayak Abrol, Pulkit Sharma
Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 197:111-121, 2023.

Abstract

This paper studies the geometric structure of a CNN filter space to investigate the redundancy or importance of individual filters. In particular, it analyses the convolutional layer filter space using simplicial geometry to establish a relation between a filter's relevance and its location on the simplex. Convex combinations of the extremal points of a simplex span its entire volume; as a result, these points are inherently the most relevant components. Based on this principle, we hypothesise that filters lying near the extremal points of a simplex modelling the filter space are the least redundant, and vice versa. We validate this positional relevance hypothesis by successfully employing it for data-independent filter ranking and artificial filter fabrication in trained convolutional neural networks. Empirical analysis on different CNN architectures, such as ResNet-50 and VGG-16, provides strong evidence in favour of the postulated positional relevance hypothesis.
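The page carries no code, so the following is a minimal, hypothetical sketch of how the positional relevance idea could be applied to one trained layer: flatten the layer's filters, project them to a low-dimensional space with an SVD, treat the convex-hull vertices of the projected points as a stand-in for the simplex's extremal points, and rank filters by distance to the nearest vertex. The function name rank_filters_by_extremality, the two-dimensional projection, and the use of scipy.spatial.ConvexHull are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.spatial import ConvexHull

def rank_filters_by_extremality(filters, n_components=2):
    """Rank one layer's filters by proximity to the extremal points of
    (an approximation of) their filter space.

    filters: array of shape (num_filters, ...), e.g. (64, 3, 3, 3).
    Returns filter indices ordered from least redundant (on or near a
    hull vertex) to most redundant (deep inside the hull).
    """
    # Flatten each filter into a vector.
    X = filters.reshape(filters.shape[0], -1).astype(float)

    # Project to a low-dimensional space so the convex hull is well defined
    # (the flattened dimension usually exceeds the number of filters).
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:n_components].T                  # (num_filters, n_components)

    # Convex-hull vertices stand in for the simplex's extremal points.
    vertices = Z[ConvexHull(Z).vertices]

    # Distance to the nearest extremal point: a small distance means the
    # filter sits near a vertex and is hypothesised to be less redundant.
    d = np.linalg.norm(Z[:, None, :] - vertices[None, :, :], axis=-1).min(axis=1)
    return np.argsort(d)                         # least redundant first

# Toy usage: random 3x3x3 filters standing in for a trained layer's weights.
rng = np.random.default_rng(0)
order = rank_filters_by_extremality(rng.normal(size=(64, 3, 3, 3)))
print("Hypothesised least redundant filters:", order[:5])

Under this reading, the lowest-ranked filters would be the first candidates for pruning or for reconstruction from the extremal ones, in the spirit of the data-independent filter ranking the abstract describes.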

Cite this Paper


BibTeX
@InProceedings{pmlr-v197-thakur23a, title = {Does Geometric Structure in Convolutional Filter Space Provide Filter Redundancy Information?}, author = {Thakur, Anshul and Abrol, Vinayak and Sharma, Pulkit}, booktitle = {Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations}, pages = {111--121}, year = {2023}, editor = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Di Bernardo, Arianna and Miolane, Nina}, volume = {197}, series = {Proceedings of Machine Learning Research}, month = {03 Dec}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v197/thakur23a/thakur23a.pdf}, url = {https://proceedings.mlr.press/v197/thakur23a.html}, abstract = {This paper studies the geometric structure of a CNN filter space to investigate the redundancy or importance of individual filters. In particular, it analyses the convolutional layer filter space using simplicial geometry to establish a relation between a filter's relevance and its location on the simplex. Convex combinations of the extremal points of a simplex span its entire volume; as a result, these points are inherently the most relevant components. Based on this principle, we hypothesise that filters lying near the extremal points of a simplex modelling the filter space are the least redundant, and vice versa. We validate this positional relevance hypothesis by successfully employing it for data-independent filter ranking and artificial filter fabrication in trained convolutional neural networks. Empirical analysis on different CNN architectures, such as ResNet-50 and VGG-16, provides strong evidence in favour of the postulated positional relevance hypothesis.} }
Endnote
%0 Conference Paper %T Does Geometric Structure in Convolutional Filter Space Provide Filter Redundancy Information? %A Anshul Thakur %A Vinayak Abrol %A Pulkit Sharma %B Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations %C Proceedings of Machine Learning Research %D 2023 %E Sophia Sanborn %E Christian Shewmake %E Simone Azeglio %E Arianna Di Bernardo %E Nina Miolane %F pmlr-v197-thakur23a %I PMLR %P 111--121 %U https://proceedings.mlr.press/v197/thakur23a.html %V 197 %X This paper studies the geometric structure of a CNN filter space to investigate the redundancy or importance of individual filters. In particular, it analyses the convolutional layer filter space using simplicial geometry to establish a relation between a filter's relevance and its location on the simplex. Convex combinations of the extremal points of a simplex span its entire volume; as a result, these points are inherently the most relevant components. Based on this principle, we hypothesise that filters lying near the extremal points of a simplex modelling the filter space are the least redundant, and vice versa. We validate this positional relevance hypothesis by successfully employing it for data-independent filter ranking and artificial filter fabrication in trained convolutional neural networks. Empirical analysis on different CNN architectures, such as ResNet-50 and VGG-16, provides strong evidence in favour of the postulated positional relevance hypothesis.
APA
Thakur, A., Abrol, V. & Sharma, P. (2023). Does Geometric Structure in Convolutional Filter Space Provide Filter Redundancy Information?. Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 197:111-121. Available from https://proceedings.mlr.press/v197/thakur23a.html.
