On Fairly Comparing Group Equivariant Networks

Lucas Roos, Steve Kroon
Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM), PMLR 251:303-317, 2024.

Abstract

This paper investigates the flexibility of Group Equivariant Convolutional Neural Networks (G-CNNs), which specialize conventional neural networks by encoding equivariance to group transformations. Inspired by splines, we propose new metrics to assess the complexity of ReLU networks and use them to quantify and compare the flexibility of networks equivariant to different groups. Our analysis suggests that the current practice of comparing networks by fixing the number of trainable parameters unfairly affords models equivariant to larger groups additional expressivity. Instead, we advocate for comparisons based on a fixed computational budget—which we empirically show results in more similar levels of network flexibility. This approach allows one to better disentangle the impact of constraining networks to be equivariant from the increased expressivity they are typically granted in the literature, enabling one to obtain a more nuanced view of the impact of enforcing equivariance. Interestingly, our experiments indicate that enforcing equivariance results in more complex fitted functions even when controlling for compute, despite reducing network expressivity.
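To make the parameters-versus-compute asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python. It is illustrative and not taken from the paper: it assumes a regular-representation group convolution in the style of Cohen and Welling (2016), and the layer sizes and cost model (multiply-accumulates, MACs) are assumptions chosen only to show the scaling.

# Illustrative sketch (not from the paper): parameter count vs. compute
# for a plain 2D conv layer and a regular-representation group conv layer.

def conv_costs(c_in, c_out, k):
    """Plain 2D conv: params, and MACs per output spatial location."""
    params = c_out * c_in * k * k
    macs = params  # one MAC per weight per output location
    return params, macs

def gconv_costs(c_in, c_out, k, group_order):
    """Group conv (regular representation): the filter bank carries an
    extra group axis of size |G|, and each of the |G| output group
    elements sums over the |G| input group elements."""
    params = c_out * c_in * group_order * k * k
    macs = c_out * group_order * c_in * group_order * k * k
    return params, macs

if __name__ == "__main__":
    p, m = conv_costs(c_in=16, c_out=16, k=3)
    pg, mg = gconv_costs(c_in=16, c_out=4, k=3, group_order=4)  # e.g. C4 rotations
    print(f"conv:  params={p}, MACs={m}")    # params=2304, MACs=2304
    print(f"gconv: params={pg}, MACs={mg}")  # params=2304, MACs=9216

Under these assumptions, matching the parameter counts (2304 in both cases) still leaves the group convolution with |G| times the MACs and |G| times as many activations per channel. This is the extra expressivity that parameter-matched comparisons grant larger groups, and which comparing at a fixed computational budget is intended to remove.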

Cite this Paper

BibTeX
@InProceedings{pmlr-v251-roos24a,
  title     = {On Fairly Comparing Group Equivariant Networks},
  author    = {Roos, Lucas and Kroon, Steve},
  booktitle = {Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)},
  pages     = {303--317},
  year      = {2024},
  editor    = {Vadgama, Sharvaree and Bekkers, Erik and Pouplin, Alison and Kaba, Sekou-Oumar and Walters, Robin and Lawrence, Hannah and Emerson, Tegan and Kvinge, Henry and Tomczak, Jakub and Jegelka, Stephanie},
  volume    = {251},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v251/main/assets/roos24a/roos24a.pdf},
  url       = {https://proceedings.mlr.press/v251/roos24a.html},
  abstract  = {This paper investigates the flexibility of Group Equivariant Convolutional Neural Networks (G-CNNs), which specialize conventional neural networks by encoding equivariance to group transformations. Inspired by splines, we propose new metrics to assess the complexity of ReLU networks and use them to quantify and compare the flexibility of networks equivariant to different groups. Our analysis suggests that the current practice of comparing networks by fixing the number of trainable parameters unfairly affords models equivariant to larger groups additional expressivity. Instead, we advocate for comparisons based on a fixed computational budget—which we empirically show results in more similar levels of network flexibility. This approach allows one to better disentangle the impact of constraining networks to be equivariant from the increased expressivity they are typically granted in the literature, enabling one to obtain a more nuanced view of the impact of enforcing equivariance. Interestingly, our experiments indicate that enforcing equivariance results in more complex fitted functions even when controlling for compute, despite reducing network expressivity.}
}
Endnote
%0 Conference Paper
%T On Fairly Comparing Group Equivariant Networks
%A Lucas Roos
%A Steve Kroon
%B Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)
%C Proceedings of Machine Learning Research
%D 2024
%E Sharvaree Vadgama
%E Erik Bekkers
%E Alison Pouplin
%E Sekou-Oumar Kaba
%E Robin Walters
%E Hannah Lawrence
%E Tegan Emerson
%E Henry Kvinge
%E Jakub Tomczak
%E Stephanie Jegelka
%F pmlr-v251-roos24a
%I PMLR
%P 303--317
%U https://proceedings.mlr.press/v251/roos24a.html
%V 251
%X This paper investigates the flexibility of Group Equivariant Convolutional Neural Networks (G-CNNs), which specialize conventional neural networks by encoding equivariance to group transformations. Inspired by splines, we propose new metrics to assess the complexity of ReLU networks and use them to quantify and compare the flexibility of networks equivariant to different groups. Our analysis suggests that the current practice of comparing networks by fixing the number of trainable parameters unfairly affords models equivariant to larger groups additional expressivity. Instead, we advocate for comparisons based on a fixed computational budget—which we empirically show results in more similar levels of network flexibility. This approach allows one to better disentangle the impact of constraining networks to be equivariant from the increased expressivity they are typically granted in the literature, enabling one to obtain a more nuanced view of the impact of enforcing equivariance. Interestingly, our experiments indicate that enforcing equivariance results in more complex fitted functions even when controlling for compute, despite reducing network expressivity.
APA
Roos, L. & Kroon, S. (2024). On Fairly Comparing Group Equivariant Networks. Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM), in Proceedings of Machine Learning Research 251:303-317. Available from https://proceedings.mlr.press/v251/roos24a.html.
