Sparse Convolutions on Lie Groups

Tycho F. A. van der Ouderaa, Mark van der Wilk
Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 197:48-62, 2023.

Abstract

Convolutional neural networks have proven very successful for a wide range of modelling tasks. Convolutional layers embed equivariance to discrete translations into the architectural structure of neural networks. Extensions have generalised this equivariance to continuous Lie groups beyond translation, such as rotation, scale, or more complex symmetries. Other works have allowed for relaxed equivariance constraints to better model data that does not fully respect symmetries, while still leveraging the useful inductive biases that equivariance provides. How continuous convolutional filters on Lie groups can best be parameterised remains an open question. To parameterise sufficiently flexible continuous filters, small MLP hypernetworks are often used in practice. Although this works, it typically introduces many additional model parameters. To be more parameter-efficient, we propose an alternative approach and define continuous filters with a small finite set of basis functions placed at anchor points. Regular convolutional layers appear as a special case, allowing for practical conversion between regular filters and our basis function filter formulation at equal memory complexity. The basis function filters enable efficient construction of neural network architectures with equivariance or relaxed equivariance, outperforming baselines on vision classification tasks.
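
To illustrate the core idea, below is a minimal NumPy sketch of a continuous filter built from a small finite set of basis functions at anchor points. It assumes a Gaussian bump basis on a regular 3x3 anchor grid; the paper's actual choice of basis, the placement of anchors on the Lie group, and the equivariance relaxation are not reproduced here, and all names (make_basis_filter, etc.) are illustrative rather than taken from the paper.

    import numpy as np

    def make_basis_filter(anchors, weights, sigma=1.0):
        # Continuous filter k(x) = sum_i weights[i] * exp(-||x - anchors[i]||^2 / (2 sigma^2)).
        # anchors: (N, d) anchor locations (e.g. offsets in the group); weights: (N,) coefficients.
        def k(x):
            sq_dists = np.sum((anchors - x) ** 2, axis=-1)  # (N,) squared distances to anchors
            return np.sum(weights * np.exp(-0.5 * sq_dists / sigma ** 2))
        return k

    # A 3x3 grid of anchors with narrow bumps behaves like a regular 3x3 filter:
    # evaluated exactly at the anchors, k returns (approximately) the anchor weights,
    # which is the sense in which regular convolution appears as a special case.
    anchors = np.array([[i, j] for i in (-1, 0, 1) for j in (-1, 0, 1)], dtype=float)
    weights = np.random.randn(9)
    k = make_basis_filter(anchors, weights, sigma=0.3)
    print(k(np.array([0.0, 0.0])))  # close to the centre weight weights[4]
    print(k(np.array([0.5, 0.5])))  # smooth interpolation between neighbouring anchors

Because k is defined everywhere, it can be sampled at arbitrary continuous group offsets while carrying only the N anchor weights as parameters, which is where the parameter saving over an MLP hypernetwork comes from.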

Cite this Paper

BibTeX
@InProceedings{pmlr-v197-ouderaa23a,
  title     = {Sparse Convolutions on Lie Groups},
  author    = {van der Ouderaa, Tycho F. A. and van der Wilk, Mark},
  booktitle = {Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {48--62},
  year      = {2023},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Di Bernardo, Arianna and Miolane, Nina},
  volume    = {197},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v197/ouderaa23a/ouderaa23a.pdf},
  url       = {https://proceedings.mlr.press/v197/ouderaa23a.html}
}
Endnote
%0 Conference Paper
%T Sparse Convolutions on Lie Groups
%A Tycho F. A. van der Ouderaa
%A Mark van der Wilk
%B Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations
%C Proceedings of Machine Learning Research
%D 2023
%E Sophia Sanborn
%E Christian Shewmake
%E Simone Azeglio
%E Arianna Di Bernardo
%E Nina Miolane
%F pmlr-v197-ouderaa23a
%I PMLR
%P 48--62
%U https://proceedings.mlr.press/v197/ouderaa23a.html
%V 197
APA
van der Ouderaa, T.F.A. & van der Wilk, M. (2023). Sparse Convolutions on Lie Groups. Proceedings of the 1st NeurIPS Workshop on Symmetry and Geometry in Neural Representations, in Proceedings of Machine Learning Research 197:48-62. Available from https://proceedings.mlr.press/v197/ouderaa23a.html.
