Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs

Saro Passaro, C. Lawrence Zitnick
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:27420-27438, 2023.

Abstract

Graph neural networks that model 3D data, such as point clouds or atoms, are typically desired to be SO(3) equivariant, i.e., equivariant to 3D rotations. Unfortunately, equivariant convolutions, which are a fundamental operation for equivariant networks, increase significantly in computational complexity as higher-order tensors are used. In this paper, we address this issue by reducing the SO(3) convolutions or tensor products to mathematically equivalent convolutions in SO(2). This is accomplished by aligning the node embeddings' primary axis with the edge vectors, which sparsifies the tensor product and reduces the computational complexity from O(L^6) to O(L^3), where L is the degree of the representation. We demonstrate the potential implications of this improvement by proposing the Equivariant Spherical Channel Network (eSCN), a graph neural network utilizing our novel approach to equivariant convolutions, which achieves state-of-the-art results on the large-scale OC-20 and OC-22 datasets.
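To make the core idea concrete, here is a minimal NumPy sketch (not the paper's eSCN implementation; all names and basis conventions are assumptions) of why edge alignment helps: once an edge is rotated onto the z-axis, the only symmetry left to respect is rotation about z, i.e. SO(2). An SO(2)-equivariant linear map may then mix only the (+m, -m) component pairs within each degree, a block-diagonal structure, rather than forming dense Clebsch-Gordan contractions across all degrees and orders.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): real spherical-
# harmonic features of degree l are stored as length-(2l+1) vectors
# ordered m = -l..l. After rotating an edge onto the z-axis, an
# SO(2)-equivariant linear map may only scale m = 0 and mix each
# (-m, +m) pair, so the per-degree cost is O(L) small blocks instead
# of a dense tensor product.

def so2_linear(x, weights):
    """Apply a block-diagonal SO(2)-equivariant linear map.
    x: {l: array of shape (2l+1,)}; weights: {l: (w0, [(a_m, b_m), ...])}.
    m = 0 is scaled by w0; each (-m, +m) pair is mixed by [[a, -b], [b, a]]."""
    out = {}
    for l, v in x.items():
        w0, pairs = weights[l]
        y = np.empty_like(v)
        y[l] = w0 * v[l]  # the m = 0 component sits at index l
        for m in range(1, l + 1):
            a, b = pairs[m - 1]
            y[l - m] = a * v[l - m] - b * v[l + m]
            y[l + m] = b * v[l - m] + a * v[l + m]
        out[l] = y
    return out

def rot_z(x, phi):
    """Rotate features about the z-axis: each (-m, +m) pair mixes by angle m*phi."""
    out = {}
    for l, v in x.items():
        y = np.empty_like(v)
        y[l] = v[l]  # m = 0 is invariant under z-rotations
        for m in range(1, l + 1):
            c, s = np.cos(m * phi), np.sin(m * phi)
            y[l - m] = c * v[l - m] - s * v[l + m]
            y[l + m] = s * v[l - m] + c * v[l + m]
        out[l] = y
    return out
```

Because both maps act as complex multiplications on each (+m, -m) pair, `so2_linear` commutes with any z-rotation, which is exactly the residual symmetry that remains after aligning the edge with the z-axis.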

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-passaro23a,
  title     = {Reducing {SO}(3) Convolutions to {SO}(2) for Efficient Equivariant {GNN}s},
  author    = {Passaro, Saro and Zitnick, C. Lawrence},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {27420--27438},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/passaro23a/passaro23a.pdf},
  url       = {https://proceedings.mlr.press/v202/passaro23a.html},
  abstract  = {Graph neural networks that model 3D data, such as point clouds or atoms, are typically desired to be $SO(3)$ equivariant, i.e., equivariant to 3D rotations. Unfortunately, equivariant convolutions, which are a fundamental operation for equivariant networks, increase significantly in computational complexity as higher-order tensors are used. In this paper, we address this issue by reducing the $SO(3)$ convolutions or tensor products to mathematically equivalent convolutions in $SO(2)$. This is accomplished by aligning the node embeddings' primary axis with the edge vectors, which sparsifies the tensor product and reduces the computational complexity from $O(L^6)$ to $O(L^3)$, where $L$ is the degree of the representation. We demonstrate the potential implications of this improvement by proposing the Equivariant Spherical Channel Network (eSCN), a graph neural network utilizing our novel approach to equivariant convolutions, which achieves state-of-the-art results on the large-scale OC-20 and OC-22 datasets.}
}
Endnote
%0 Conference Paper
%T Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs
%A Saro Passaro
%A C. Lawrence Zitnick
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-passaro23a
%I PMLR
%P 27420--27438
%U https://proceedings.mlr.press/v202/passaro23a.html
%V 202
%X Graph neural networks that model 3D data, such as point clouds or atoms, are typically desired to be $SO(3)$ equivariant, i.e., equivariant to 3D rotations. Unfortunately, equivariant convolutions, which are a fundamental operation for equivariant networks, increase significantly in computational complexity as higher-order tensors are used. In this paper, we address this issue by reducing the $SO(3)$ convolutions or tensor products to mathematically equivalent convolutions in $SO(2)$. This is accomplished by aligning the node embeddings' primary axis with the edge vectors, which sparsifies the tensor product and reduces the computational complexity from $O(L^6)$ to $O(L^3)$, where $L$ is the degree of the representation. We demonstrate the potential implications of this improvement by proposing the Equivariant Spherical Channel Network (eSCN), a graph neural network utilizing our novel approach to equivariant convolutions, which achieves state-of-the-art results on the large-scale OC-20 and OC-22 datasets.
APA
Passaro, S. & Zitnick, C.L. (2023). Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:27420-27438. Available from https://proceedings.mlr.press/v202/passaro23a.html.