Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency

Georg Bökman, David Nordström, Fredrik Kahl
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:4823-4838, 2025.

Abstract

Incorporating geometric invariance into neural networks enhances parameter efficiency but typically increases computational costs. This paper introduces new equivariant neural networks that preserve symmetry while maintaining a comparable number of floating-point operations (FLOPs) per parameter to standard non-equivariant networks. We focus on horizontal mirroring (flopping) invariance, common in many computer vision tasks. The main idea is to parametrize the feature spaces in terms of mirror-symmetric and mirror-antisymmetric features, i.e., irreps of the flopping group. This decomposition makes the linear layers block-diagonal, so they require half the FLOPs of their dense counterparts. Our approach reduces both FLOPs and wall-clock time, providing a practical solution for efficient, scalable symmetry-aware architectures.
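
To make the block-diagonal idea concrete, below is a minimal PyTorch sketch under our own assumptions, not code from the paper: the names split_irreps and BlockDiagonalLinear are hypothetical, and the sketch symmetrizes an existing feature map into the two irreps rather than parametrizing irrep-typed features directly, as the paper's architectures do.

import torch
import torch.nn as nn

def split_irreps(feat):
    # feat: (B, C, H, W). Horizontal mirroring ("flopping") flips the width axis.
    flopped = torch.flip(feat, dims=[-1])
    sym = 0.5 * (feat + flopped)    # invariant under flopping (+1 irrep)
    anti = 0.5 * (feat - flopped)   # negated by flopping (-1 irrep)
    return sym, anti

class BlockDiagonalLinear(nn.Module):
    # Channel-mixing layer that is equivariant to flopping: mixing the two
    # irrep types is forbidden, so a dense d x d weight matrix
    # (d = c_sym + c_anti) collapses into two independent blocks.
    # With an even split, that is 2 * (d/2)^2 = d^2/2 multiply-adds,
    # i.e. half the FLOPs of the unconstrained d x d layer.
    def __init__(self, c_sym, c_anti):
        super().__init__()
        self.w_sym = nn.Linear(c_sym, c_sym)
        # A bias on the antisymmetric block would break equivariance,
        # since flopping negates these features but not a constant bias.
        self.w_anti = nn.Linear(c_anti, c_anti, bias=False)

    def forward(self, x_sym, x_anti):
        # Block-diagonal action: no cross terms between the irreps.
        return self.w_sym(x_sym), self.w_anti(x_anti)

# Usage: decompose a feature map, then mix channels pointwise per irrep.
x = torch.randn(2, 64, 8, 8)
s, a = split_irreps(x)
layer = BlockDiagonalLinear(64, 64)
y_s, y_a = layer(s.movedim(1, -1), a.movedim(1, -1))  # channels last for nn.Linear

Flopping the input flips the symmetric part and flips-and-negates the antisymmetric part; since the layer never mixes the two types, the composite map remains equivariant while doing half the channel-mixing work.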

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-bokman25a,
  title     = {Flopping for {FLOP}s: Leveraging Equivariance for Computational Efficiency},
  author    = {B\"{o}kman, Georg and Nordstr\"{o}m, David and Kahl, Fredrik},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {4823--4838},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/bokman25a/bokman25a.pdf},
  url       = {https://proceedings.mlr.press/v267/bokman25a.html},
  abstract  = {Incorporating geometric invariance into neural networks enhances parameter efficiency but typically increases computational costs. This paper introduces new equivariant neural networks that preserve symmetry while maintaining a comparable number of floating-point operations (FLOPs) per parameter to standard non-equivariant networks. We focus on horizontal mirroring (flopping) invariance, common in many computer vision tasks. The main idea is to parametrize the feature spaces in terms of mirror-symmetric and mirror-antisymmetric features, i.e., irreps of the flopping group. This decomposes the linear layers to be block-diagonal, requiring half the number of FLOPs. Our approach reduces both FLOPs and wall-clock time, providing a practical solution for efficient, scalable symmetry-aware architectures.}
}
Endnote
%0 Conference Paper
%T Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency
%A Georg Bökman
%A David Nordström
%A Fredrik Kahl
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-bokman25a
%I PMLR
%P 4823--4838
%U https://proceedings.mlr.press/v267/bokman25a.html
%V 267
%X Incorporating geometric invariance into neural networks enhances parameter efficiency but typically increases computational costs. This paper introduces new equivariant neural networks that preserve symmetry while maintaining a comparable number of floating-point operations (FLOPs) per parameter to standard non-equivariant networks. We focus on horizontal mirroring (flopping) invariance, common in many computer vision tasks. The main idea is to parametrize the feature spaces in terms of mirror-symmetric and mirror-antisymmetric features, i.e., irreps of the flopping group. This decomposes the linear layers to be block-diagonal, requiring half the number of FLOPs. Our approach reduces both FLOPs and wall-clock time, providing a practical solution for efficient, scalable symmetry-aware architectures.
APA
Bökman, G., Nordström, D. & Kahl, F. (2025). Flopping for FLOPs: Leveraging Equivariance for Computational Efficiency. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:4823-4838. Available from https://proceedings.mlr.press/v267/bokman25a.html.
