
- title: 'Preface to Geometry-grounded Representation Learning and Generative Modeling (GRaM) Workshop'
  abstract: 'The Geometry-grounded Representation Learning and Generative Modeling (GRaM) workshop at ICLR 2024 explored the concept of geometric grounding. A representation, method, or theory is grounded in geometry if it is amenable to geometric reasoning, that is, if it abides by the mathematics of geometry. This idea plays a crucial role in developing generative models that understand geometry and in building geometric representations. The workshop brought together work on many different aspects of geometric representations.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/vadgama24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/vadgama24a/vadgama24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-vadgama24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 1-6
  id: vadgama24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 1
  lastpage: 6
  published: 2024-10-12 00:00:00 +0000
- title: 'SE(3)-Hyena Operator for Scalable Equivariant Learning'
  abstract: 'Modeling global geometric context while maintaining equivariance is crucial for accurate predictions in many fields such as biology, chemistry, or vision. Yet, this is challenging due to the computational demands of processing high-dimensional data at scale. Existing approaches, such as equivariant self-attention or distance-based message passing, suffer from quadratic complexity with respect to sequence length, while localized methods sacrifice global information. Inspired by the recent success of state-space and long-convolutional models, in this work we introduce the SE(3)-Hyena operator, an equivariant long-convolutional model based on the Hyena operator. SE(3)-Hyena captures global geometric context at sub-quadratic complexity while maintaining equivariance to rotations and translations. Evaluated on equivariant associative recall and n-body modeling, SE(3)-Hyena matches or outperforms equivariant self-attention while requiring significantly less memory and computational resources for long sequences. Our model processes the geometric context of 20k tokens 3.5 times faster than the equivariant transformer and supports a 175 times longer context within the same memory budget.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/moskalev24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/moskalev24a/moskalev24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-moskalev24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Artem
    family: Moskalev
  - given: Mangal
    family: Prakash
  - given: Rui
    family: Liao
  - given: Tommaso
    family: Mansi
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 7-19
  id: moskalev24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 7
  lastpage: 19
  published: 2024-10-12 00:00:00 +0000
- title: 'Topology-Informed Graph Transformer'
  abstract: 'Transformers, through their self-attention mechanisms, have revolutionized performance in Natural Language Processing and Vision. Recently, there has been increasing interest in integrating Transformers with Graph Neural Networks (GNNs) to enhance the analysis of geometric properties of graphs by employing global attention mechanisms. A key challenge in improving graph transformers is enhancing their ability to distinguish between isomorphic graphs, which can potentially boost their predictive performance. To address this challenge, we introduce the ’Topology-Informed Graph Transformer (TIGT)’, a novel transformer enhancing both discriminative power in detecting graph isomorphisms and the overall performance of Graph Transformers. TIGT consists of four components: (1) a topological positional embedding layer using non-isomorphic universal covers based on cyclic subgraphs of graphs to ensure unique graph representation, (2) a dual-path message-passing layer to explicitly encode topological characteristics throughout the encoder layers, (3) a global attention mechanism, and (4) a graph information layer to recalibrate channel-wise graph features for improved feature representation. TIGT outperforms previous Graph Transformers in classifying a synthetic dataset aimed at distinguishing isomorphism classes of graphs. Additionally, mathematical analysis and empirical evaluations highlight our model’s competitive edge over state-of-the-art Graph Transformers across various benchmark datasets.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/choi24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/choi24a/choi24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-choi24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Yun Young
    family: Choi
  - given: Sun Woo
    family: Park
  - given: Minho
    family: Lee
  - given: Youngho
    family: Woo
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 20-34
  id: choi24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 20
  lastpage: 34
  published: 2024-10-12 00:00:00 +0000
- title: 'Alignment of MPNNs and Graph Transformers'
  abstract: 'As the complexity of machine learning (ML) model architectures increases, it is important to understand to what degree simpler and more efficient architectures can align with their complex counterparts. In this paper, we investigate the degree to which a Message Passing Neural Network (MPNN) can operate similarly to a Graph Transformer. We do this by training an MPNN to align with the intermediate embeddings of a Relational Transformer (RT). Throughout this process, we explore variations of the standard MPNN and assess the impact of different components on the degree of alignment. Our findings suggest that an MPNN can align to RT and the most important components that affect the alignment are the MPNN’s permutation invariant aggregation function, virtual node and layer normalisation.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/nguyen24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/nguyen24a/nguyen24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-nguyen24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Bao
    family: Nguyen
  - given: Anjana
    family: Yodaiken
  - given: Petar
    family: Veličković
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 35-49
  id: nguyen24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 35
  lastpage: 49
  published: 2024-10-12 00:00:00 +0000
- title: 'Stability Analysis of Equivariant Convolutional Representations Through The Lens of Equivariant Multi-layered CKNs'
  abstract: 'In this paper we construct and theoretically analyse group equivariant convolutional kernel networks (CKNs), which are useful in understanding the geometry of (equivariant) CNNs through the lens of reproducing kernel Hilbert spaces (RKHSs). We then study the stability of such equiv-CKNs under the action of diffeomorphisms and draw a connection with equiv-CNNs, with the goal of analysing the geometry of the inductive biases of equiv-CNNs. Traditional deep learning architectures, including CNNs, trained with sophisticated optimization algorithms are vulnerable to perturbations, including ‘adversarial examples’. Understanding the RKHS norm of such models through CKNs is useful in designing appropriate architectures and robust equivariant representation learning models.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/chowdhury24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/chowdhury24a/chowdhury24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-chowdhury24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Soutrik Roy
    family: Chowdhury
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 50-64
  id: chowdhury24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 50
  lastpage: 64
  published: 2024-10-12 00:00:00 +0000
- title: 'Asynchrony Invariance Loss Functions for Graph Neural Networks'
  abstract: 'A ubiquitous class of graph neural networks (GNNs) operates according to the message-passing paradigm, such that nodes systematically broadcast and listen to their neighbourhood. Yet, these synchronous computations have been deemed potentially sub-optimal as they could result in irrelevant information sent across the graph, thus interfering with efficient representation learning. In this work, we devise self-supervised loss functions biasing learning of synchronous GNN-based neural algorithmic reasoners towards representations that are invariant to asynchronous execution. Asynchrony invariance could successfully be learned, as revealed by analyses exploring the evolution of the self-supervised losses as well as their effect on the learned latent embeddings. Our approach to enforce asynchrony invariance constitutes a novel, potentially valuable tool for graph representation learning, which is increasingly prevalent in multiple real-world contexts.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/monteagudo-lago24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/monteagudo-lago24a/monteagudo-lago24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-monteagudo-lago24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Pablo
    family: Monteagudo-Lago
  - given: Arielle
    family: Rosinski
  - given: Andrew Joseph
    family: Dudzik
  - given: Petar
    family: Veličković
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 65-77
  id: monteagudo-lago24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 65
  lastpage: 77
  published: 2024-10-12 00:00:00 +0000
- title: 'A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry'
  abstract: 'Hyperspherical Prototypical Learning (HPL) is a supervised approach to representation learning that designs class prototypes on the unit hypersphere. The prototypes bias the representations to class separation in a scale invariant and known geometry. Previous approaches to HPL have either of the following shortcomings: (i) they follow an unprincipled optimisation procedure; or (ii) they are theoretically sound, but are constrained to only one possible latent dimension. In this paper, we address both shortcomings. To address (i), we present a principled optimisation procedure whose solution we show is optimal. To address (ii), we construct well-separated prototypes in a wide range of dimensions using linear block codes. Additionally, we give a full characterisation of the optimal prototype placement in terms of achievable and converse bounds, showing that our proposed methods are near-optimal.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/lindstrom24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/lindstrom24a/lindstrom24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-lindstrom24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Martin
    family: Lindström
  - given: Borja
    family: Rodríguez-Gálvez
  - given: Ragnar
    family: Thobaben
  - given: Mikael
    family: Skoglund
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 78-91
  id: lindstrom24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 78
  lastpage: 91
  published: 2024-10-12 00:00:00 +0000
- title: '3D Shape Completion with Test-Time Training'
  abstract: 'This work addresses the problem of <em>shape completion</em>, i.e., the task of restoring incomplete shapes by predicting their missing parts. While previous works have often predicted the fractured and restored shape in one step, we approach the task by separately predicting the fractured and newly restored parts, while ensuring these predictions are interconnected. We use a decoder network motivated by related work on the prediction of signed distance functions (DeepSDF). In particular, our representation allows us to consider <em>test-time training</em>, i.e., finetuning network parameters to match the given incomplete shape more accurately during inference. While previous works often have difficulties with artifacts around the fracture boundary, we demonstrate that our overfitting to the fractured parts leads to significant improvements in the restoration of eight different shape categories of the ShapeNet dataset in terms of their chamfer distances.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/schopf-kuester24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/schopf-kuester24a/schopf-kuester24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-schopf-kuester24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Michael
    family: Schopf-Kuester
  - given: Zorah
    family: Lähner
  - given: Michael
    family: Moeller
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 92-102
  id: schopf-kuester24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 92
  lastpage: 102
  published: 2024-10-12 00:00:00 +0000
- title: 'Commute-Time-Optimised Graphs for GNNs'
  abstract: 'We explore graph rewiring methods that optimise commute time. Recent graph rewiring approaches facilitate long-range interactions in sparse graphs, making such rewirings commute-time-optimal on average. However, when an expert prior exists on which node pairs should or should not interact, a superior rewiring would favour short commute times between these privileged node pairs. We construct two synthetic datasets with known priors reflecting realistic settings, and use these to motivate two bespoke rewiring methods that incorporate the known prior. We investigate the regimes where our rewiring improves test performance on the synthetic datasets. Finally, we perform a case study on a real-world citation graph to investigate the practical implications of our work.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/sterner24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/sterner24a/sterner24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-sterner24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Igor
    family: Sterner
  - given: Shiye
    family: Su
  - given: Petar
    family: Veličković
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 103-112
  id: sterner24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 103
  lastpage: 112
  published: 2024-10-12 00:00:00 +0000
- title: 'Self-supervised detection of perfect and partial input-dependent symmetries'
  abstract: 'Group equivariance can overly constrain models if the symmetries in the group differ from those observed in data. While common methods address this by determining the appropriate level of symmetry at the dataset level, they are limited to supervised settings and ignore scenarios in which multiple levels of symmetry co-exist in the same dataset. In this paper, we propose a method able to detect the level of symmetry of each input without the need for labels. Our framework is general enough to accommodate different families of both continuous and discrete symmetry distributions, such as arbitrary unimodal, symmetric distributions and discrete groups. We validate the effectiveness of our approach on synthetic datasets with different per-class levels of symmetries, and demonstrate practical applications such as the detection of out-of-distribution symmetries.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/urbano24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/urbano24a/urbano24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-urbano24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Alonso
    family: Urbano
  - given: David W.
    family: Romero
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 113-131
  id: urbano24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 113
  lastpage: 131
  published: 2024-10-12 00:00:00 +0000
- title: 'Metric Learning for Clifford Group Equivariant Neural Networks'
  abstract: 'Clifford Group Equivariant Neural Networks (CGENNs) leverage Clifford algebras and multivectors as an alternative approach to incorporating group equivariance to ensure symmetry constraints in neural representations. In principle, this formulation generalizes to orthogonal groups and preserves equivariance regardless of the metric signature. However, previous works have restricted internal network representations to Euclidean or Minkowski (pseudo-)metrics, handpicked depending on the problem at hand. In this work, we propose an alternative method that enables the metric to be learned in a data-driven fashion, allowing the CGENN network to learn more flexible representations. Specifically, we populate metric matrices fully, ensuring they are symmetric by construction, and leverage eigenvalue decomposition to integrate this additional learnable component into the original CGENN formulation in a principled manner. Additionally, we motivate our method using insights from category theory, which enables us to explain Clifford algebras as a categorical construction and guarantee the mathematical soundness of our approach. We validate our method in various tasks and showcase the advantages of learning more flexible latent metric representations. The code and data are available at \url{https://github.com/rick-ali/Metric-Learning-for-CGENNs}.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/ali24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/ali24a/ali24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-ali24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Riccardo
    family: Ali
  - given: Paulina
    family: Kulytė
  - given: Haitz
    prefix: Sáez de
    family: Ocáriz Borde
  - given: Pietro
    family: Lio
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 132-145
  id: ali24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 132
  lastpage: 145
  published: 2024-10-12 00:00:00 +0000
- title: 'Dirac–Bianconi Graph Neural Networks – Enabling Non-Diffusive Long-Range Graph Predictions'
  abstract: 'The geometry of a graph is encoded in dynamical processes on the graph. Many graph neural network (GNN) architectures are inspired by such dynamical systems, typically based on the graph Laplacian. Here, we introduce Dirac–Bianconi GNNs (DBGNNs), which are based on the topological Dirac equation recently proposed by Bianconi. Based on the graph Laplacian, we demonstrate that DBGNNs explore the geometry of the graph in a fundamentally different way than conventional message passing neural networks (MPNNs). While regular MPNNs propagate features diffusively, analogous to the heat equation, DBGNNs allow for coherent long-range propagation. Experimental results showcase the superior performance of DBGNNs over existing conventional MPNNs for long-range predictions of power grid stability and peptide properties. This study highlights the effectiveness of DBGNNs in capturing intricate graph dynamics, providing notable advancements in GNN architectures.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/nauck24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/nauck24a/nauck24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-nauck24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Christian
    family: Nauck
  - given: Rohan
    family: Gorantla
  - given: Michael
    family: Lindner
  - given: Konstantin
    family: Schurholt
  - given: Antonia S. J. S.
    family: Mey
  - given: Frank
    family: Hellmann
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 146-157
  id: nauck24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 146
  lastpage: 157
  published: 2024-10-12 00:00:00 +0000
- title: 'Leveraging Topological Guidance for Improved Knowledge Distillation'
  abstract: 'Deep learning has shown its efficacy in extracting useful features to solve various computer vision tasks. However, when the structure of the data is complex and noisy, capturing effective information to improve performance is very difficult. To this end, topological data analysis (TDA) has been utilized to derive useful representations that can contribute to improving performance and robustness against perturbations. Despite its effectiveness, the requirements for large computational resources and significant time consumption in extracting topological features through TDA are critical problems when implementing it on small devices. To address this issue, we propose a framework called Topological Guidance-based Knowledge Distillation (TGD), which uses topological features in knowledge distillation (KD) for image classification tasks. We utilize KD to train a superior lightweight model and provide topological features with multiple teachers simultaneously. We introduce a mechanism for integrating features from different teachers and reducing the knowledge gap between teachers and the student, which aids in improving performance. We demonstrate the effectiveness of our approach through diverse empirical evaluations.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/jeon24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/jeon24a/jeon24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-jeon24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Eun Som
    family: Jeon
  - given: Rahul
    family: Khurana
  - given: Aishani
    family: Pathak
  - given: Pavan
    family: Turaga
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 158-172
  id: jeon24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 158
  lastpage: 172
  published: 2024-10-12 00:00:00 +0000
- title: 'E(n) Equivariant Message Passing Cellular Networks'
  abstract: 'This paper introduces E(n) Equivariant Message Passing Cellular Networks (EMPCNs), an extension of E(n) Equivariant Graph Neural Networks to CW-complexes. Our approach addresses two aspects of geometric message passing networks: 1) enhancing their expressiveness by incorporating arbitrary cells, and 2) achieving this in a computationally efficient way with a decoupled EMPCNs technique. We demonstrate that EMPCNs achieve close to state-of-the-art performance on multiple tasks without the need for steerability, including many-body predictions and motion capture. Moreover, ablation studies confirm that decoupled EMPCNs exhibit stronger generalization capabilities than their non-topologically informed counterparts. These findings show that EMPCNs can be used as a scalable and expressive framework for higher-order message passing in geometric and topological graphs.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/kovac-24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/kovac-24a/kovac-24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-kovac-24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Veljko
    family: Kovač
  - given: Erik
    family: Bekkers
  - given: Pietro
    family: Lió
  - given: Floor
    family: Eijkelboom
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 173-186
  id: kovac-24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 173
  lastpage: 186
  published: 2024-10-12 00:00:00 +0000
- title: 'A Simple and Expressive Graph Neural Network Based Method for Structural Link Representation'
  abstract: 'Graph Neural Networks (GNNs) have achieved state-of-the-art results in tasks like node classification, link prediction, and graph classification. While much research has focused on their ability to distinguish graphs, fewer studies have addressed their capacity to differentiate links, a complex and less explored area. This paper introduces SLRGNN, a novel, theoretically grounded GNN-based method for link prediction. SLRGNN ensures that link representations are distinct if and only if the links have different structural roles within the graph. Our approach transforms the link prediction problem into a node classification problem on the corresponding line graph, enhancing expressiveness without sacrificing efficiency. Unlike existing methods, SLRGNN computes link probabilities in a single inference step, avoiding the need for individual subgraph constructions. We provide a formal proof of our method’s expressiveness and validate its superior performance through experiments on real-world datasets. The code is publicly available.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/lachi24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/lachi24a/lachi24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-lachi24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Veronica
    family: Lachi
  - given: Francesco
    family: Ferrini
  - given: Antonio
    family: Longa
  - given: Bruno
    family: Lepri
  - given: Andrea
    family: Passerini
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 187-201
  id: lachi24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 187
  lastpage: 201
  published: 2024-10-12 00:00:00 +0000
- title: 'Invertible Temper Modeling using Normalizing Flows and the Effects of Structure Preserving Loss'
  abstract: 'Advanced manufacturing research and development is typically small-scale, owing to costly experiments associated with these novel processes. Deep learning techniques could help accelerate this development cycle but frequently struggle in small-data regimes like the advanced manufacturing space. While prior work has applied deep learning to modeling visually plausible advanced manufacturing microstructures, little work has been done on data-driven modeling of how microstructures are affected by heat treatment, or assessing the degree to which synthetic microstructures are able to support existing workflows. We propose to address this gap by using invertible neural networks (normalizing flows) to model the effects of heat treatment, e.g., tempering. The model is developed using scanning electron microscope imagery from samples produced using shear-assisted processing and extrusion (ShAPE) manufacturing. This approach not only produces visually and topologically plausible samples, but also captures information related to a sample’s material properties or experimental process parameters. We also demonstrate that topological data analysis, used in prior work to characterize microstructures, can also be used to stabilize model training, preserve structure, and improve downstream results. We assess directions for future work and identify our approach as an important step towards an end-to-end deep learning system for accelerating advanced manufacturing research and development.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/howland24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/howland24a/howland24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-howland24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Sylvia
    family: Howland
  - given: Keerti-Sahithi
    family: Kappagantula
  - given: Henry
    family: Kvinge
  - given: Tegan
    family: Emerson
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 202-211
  id: howland24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 202
  lastpage: 211
  published: 2024-10-12 00:00:00 +0000
- title: 'Topological and Dynamical Representations for Radio Frequency Signal Classification'
  abstract: 'Radio Frequency (RF) signals are found throughout our world, carrying over-the-air information for both digital and analog uses, with applications ranging from WiFi to radio. One area of focus in RF signal analysis is determining the modulation schemes employed in these signals, which is crucial in many RF signal processing domains, from secure communication to spectrum monitoring. This work investigates the accuracy and noise robustness of novel Topological Data Analysis (TDA) and dynamical-representation-based approaches paired with a small convolutional neural network for RF signal modulation classification, with a comparison to state-of-the-art deep neural network approaches. We show that by using TDA tools, like the Vietoris-Rips and lower star filtrations, and the Takens’ embedding, in conjunction with a standard shallow neural network, we can capture the intrinsic dynamical, geometric, and topological features of the underlying signal’s manifold, yielding informative representations of the RF signals. Our approach is effective in handling the modulation classification task and is notably noise robust, outperforming the commonly used deep neural network approaches in modulation classification. Moreover, our fusion of dynamical and topological information is able to attain similar performance to deep neural network architectures with significantly smaller training datasets.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/meyers24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/meyers24a/meyers24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-meyers24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Audun
    family: Meyers
  - given: Timothy
    family: Doster
  - given: Colin
    family: Olson
  - given: Tegan
    family: Emerson
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 212-221
  id: meyers24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 212
  lastpage: 221
  published: 2024-10-12 00:00:00 +0000
- title: 'SCENE-Net V2: Interpretable Multiclass 3D Scene Understanding with Geometric Priors'
  abstract: 'In this paper, we present SCENE-Net V2, a new resource-efficient, gray-box model for multiclass 3D scene understanding. SCENE-Net V2 leverages Group Equivariant Non-Expansive Operators (GENEOs) to incorporate fundamental geometric priors as inductive biases, offering a more transparent alternative to the prevalent black-box models in the domain. This model addresses the limitations of its white-box predecessor, SCENE-Net, by expanding its applicability from pole-like structures to a wider range of datasets with detailed 3D elements. Our model strikes a sweet spot between applicability and transparency: SCENE-Net V2 is a general method for object identification with interpretability guarantees. Our experimental results demonstrate that SCENE-Net V2 achieves competitive performance with a significantly lower parameter count. Furthermore, we propose the use of GENEO-based architectures as a feature extraction tool for black-box models, enabling an increase in performance by adding a minimal number of meaningful parameters. Our code is available at https://github.com/dlavado/SCENE-Net-V2.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/lavado24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/lavado24a/lavado24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-lavado24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Diogo
    family: Lavado
  - given: Cláudia
    family: Soares
  - given: Alessandra
    family: Micheletti
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 222-232
  id: lavado24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 222
  lastpage: 232
  published: 2024-10-12 00:00:00 +0000
- title: 'Variational Inference Failures Under Model Symmetries: Permutation Invariant Posteriors for Bayesian Neural Networks'
  abstract: 'Weight space symmetries in neural network architectures, such as permutation symmetries in MLPs, give rise to Bayesian neural network (BNN) posteriors with many equivalent modes. This multimodality poses a challenge for variational inference (VI) techniques, which typically rely on approximating the posterior with a unimodal distribution. In this work, we investigate the impact of weight space permutation symmetries on VI. We demonstrate, both theoretically and empirically, that these symmetries lead to biases in the approximate posterior, which degrade predictive performance and posterior fit if not explicitly accounted for. To mitigate this behavior, we leverage the symmetric structure of the posterior and devise a symmetrization mechanism for constructing permutation invariant variational posteriors. We show that the symmetrized distribution has a strictly better fit to the true posterior, and that it can be trained using the original ELBO objective with a modified KL regularization term. We demonstrate experimentally that our approach mitigates the aforementioned biases and results in improved predictions and a higher ELBO.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/gelberg24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/gelberg24a/gelberg24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-gelberg24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Yoav
    family: Gelberg
  - given: Tycho F. A.
    prefix: van der
    family: Ouderaa
  - given: Mark
    prefix: van der
    family: Wilk
  - given: Yarin
    family: Gal
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 233-248
  id: gelberg24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 233
  lastpage: 248
  published: 2024-10-12 00:00:00 +0000
- title: 'Joint Diffusion Processes as an Inductive Bias in Sheaf Neural Networks'
  abstract: 'Sheaf Neural Networks (SNNs) naturally extend Graph Neural Networks (GNNs) by endowing the graph with a cellular sheaf, equipping nodes and edges with vector spaces and defining linear mappings between them. While the attached geometric structure has proven to be useful in analyzing heterophily and oversmoothing, so far the methods by which the sheaf is computed do not always guarantee a good performance in such settings. In this work, drawing inspiration from opinion dynamics concepts, we propose two novel sheaf learning approaches that (i) provide a more intuitive understanding of the involved structure maps, (ii) introduce a useful inductive bias for heterophily and oversmoothing, and (iii) infer the sheaf in a way that does not scale with the number of features, thus using fewer learnable parameters than existing methods. In our evaluation, we show the limitations of the real-world benchmarks used so far on SNNs, and design a new synthetic task (leveraging the symmetries of $n$-dimensional ellipsoids) that enables us to better assess the strengths and weaknesses of sheaf-based models. Our extensive experimentation on these novel datasets reveals valuable insights into the scenarios and contexts where basic SNNs and our proposed approaches can be beneficial.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/hernandez-caralt24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/hernandez-caralt24a/hernandez-caralt24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-hernandez-caralt24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Ferran
    family: Hernandez Caralt
  - given: Guillermo
    family: Bernárdez Gil
  - given: Iulia
    family: Duta
  - given: Pietro
    family: Liò
  - given: Eduard
    family: Alarcón Cot
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 249-263
  id: hernandez-caralt24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 249
  lastpage: 263
  published: 2024-10-12 00:00:00 +0000
- title: 'Sheaf Diffusion Goes Nonlinear: Enhancing GNNs with Adaptive Sheaf Laplacians'
  abstract: 'Sheaf Neural Networks (SNNs) have recently been introduced to enhance Graph Neural Networks (GNNs) in their capability to learn from graphs. Previous studies either focus on linear sheaf Laplacians or hand-crafted nonlinear sheaf Laplacians. The former are not always expressive enough in modeling complex interactions between nodes, such as antagonistic dynamics and bounded confidence dynamics, while the latter use a fixed nonlinear function that is not adapted to the data at hand. To enhance the capability of SNNs to capture complex node-to-node interactions while adapting to different scenarios, we propose a Nonlinear Sheaf Diffusion (NLSD) model, which incorporates nonlinearity into the Laplacian of SNNs through a general function learned from data. Our model is validated on a synthetic community detection dataset, where it outperforms linear SNNs and common GNN baselines in a node classification task, showcasing its ability to leverage complex network dynamics.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/zaghen24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/zaghen24a/zaghen24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-zaghen24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Olga
    family: Zaghen
  - given: Antonio
    family: Longa
  - given: Steve
    family: Azzolin
  - given: Lev
    family: Telyatnikov
  - given: Andrea
    family: Passerini
  - given: Pietro
    family: Liò
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 264-276
  id: zaghen24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 264
  lastpage: 276
  published: 2024-10-12 00:00:00 +0000
- title: 'Decoder ensembling for learned latent geometries'
  abstract: 'Latent space geometry provides a rigorous and empirically valuable framework for interacting with the latent variables of deep generative models. This approach reinterprets Euclidean latent spaces as Riemannian through a pull-back metric, allowing for a standard differential geometric analysis of the latent space. Unfortunately, data manifolds are generally compact and easily disconnected or filled with holes, suggesting a topological mismatch to the Euclidean latent space. The most established solution to this mismatch is to let uncertainty be a proxy for topology, but in neural network models, this is often realized through crude heuristics that lack principle and generally do not scale to high-dimensional representations. We propose using ensembles of decoders to capture model uncertainty and show how to easily compute geodesics on the associated expected manifold. Empirically, we find this simple and reliable, thereby coming one step closer to easy-to-use latent geometries.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/syrota24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/syrota24a/syrota24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-syrota24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Stas
    family: Syrota
  - given: Pablo
    family: Moreno-Muñoz
  - given: Søren
    family: Hauberg
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 277-285
  id: syrota24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 277
  lastpage: 285
  published: 2024-10-12 00:00:00 +0000
- title: 'The NGT200 Dataset: Geometric Multi-View Isolated Sign Recognition'
  abstract: 'Sign Language Processing (SLP) provides a foundation for a more inclusive future in language technology; however, the field faces several significant challenges that must be addressed to achieve practical, real-world applications. This work addresses multi-view isolated sign recognition (MV-ISR), and highlights the essential role of 3D awareness and geometry in SLP systems. We introduce the NGT200 dataset, a novel spatio-temporal multi-view benchmark, establishing MV-ISR as distinct from single-view ISR (SV-ISR). We demonstrate the benefits of synthetic data and propose conditioning sign representations on spatial symmetries inherent in sign language. Leveraging an SE(2) equivariant model improves MV-ISR performance by 8-22% over the baseline.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/ranum24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/ranum24a/ranum24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-ranum24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Oline
    family: Ranum
  - given: David R.
    family: Wessels
  - given: Gomer
    family: Otterspeer
  - given: Erik J.
    family: Bekkers
  - given: Floris
    family: Roelofsen
  - given: Jari I.
    family: Andersen
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 286-302
  id: ranum24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 286
  lastpage: 302
  published: 2024-10-12 00:00:00 +0000
- title: 'On Fairly Comparing Group Equivariant Networks'
  abstract: 'This paper investigates the flexibility of Group Equivariant Convolutional Neural Networks (G-CNNs), which specialize conventional neural networks by encoding equivariance to group transformations. Inspired by splines, we propose new metrics to assess the complexity of ReLU networks and use them to quantify and compare the flexibility of networks equivariant to different groups. Our analysis suggests that the current practice of comparing networks by fixing the number of trainable parameters unfairly affords models equivariant to larger groups additional expressivity. Instead, we advocate for comparisons based on a fixed computational budget—which we empirically show results in more similar levels of network flexibility. This approach allows one to better disentangle the impact of constraining networks to be equivariant from the increased expressivity they are typically granted in the literature, enabling one to obtain a more nuanced view of the impact of enforcing equivariance. Interestingly, our experiments indicate that enforcing equivariance results in <em>more</em> complex fitted functions even when controlling for compute, despite reducing network expressivity.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/roos24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/roos24a/roos24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-roos24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Lucas
    family: Roos
  - given: Steve
    family: Kroon
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 303-317
  id: roos24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 303
  lastpage: 317
  published: 2024-10-12 00:00:00 +0000
- title: 'Graph Convolutional Networks for Learning Laplace-Beltrami Operators'
  abstract: 'Recovering a high-level representation of geometric data is a fundamental goal in geometric modeling and computer graphics. In this paper, we introduce a data-driven approach to computing the spectrum of the Laplace-Beltrami operator of triangle meshes using graph convolutional networks. Specifically, we train graph convolutional networks on a large-scale dataset of synthetically generated triangle meshes, encoded with geometric data consisting of Voronoi areas, normalized edge lengths, and the Gauss map, to infer eigenvalues of 3D shapes. We attempt to address the ability of graph neural networks to capture global shape descriptors, including spectral information, that were previously inaccessible using existing methods from computer vision, and our paper exhibits promising signals suggesting that Laplace-Beltrami eigenvalues on discrete surfaces can be learned. Additionally, we perform ablation studies showing that the addition of geometric data leads to improved accuracy.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/wu24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/wu24a/wu24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-wu24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Yingying
    family: Wu
  - given: Roger
    family: Fu
  - given: Yang
    family: Peng
  - given: Qifeng
    family: Chen
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 318-331
  id: wu24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 318
  lastpage: 331
  published: 2024-10-12 00:00:00 +0000
- title: 'The Geometry of Diffusion Models: Tubular Neighbourhoods and Singularities'
  abstract: 'Diffusion generative models have been a leading approach for generating high-dimensional data. The current research aims to investigate the relation between the dynamics of diffusion models and the tubular neighbourhoods of a data manifold. We propose an algorithm to estimate the injectivity radius, the supremum of radii of tubular neighbourhoods. Our research relates geometric objects such as curvatures of data manifolds and dimensions of ambient spaces, to singularities of the generative dynamics such as emergent critical phenomena or spontaneous symmetry breaking.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/sakamoto24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/sakamoto24a/sakamoto24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-sakamoto24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Kotaro
    family: Sakamoto
  - given: Ryosuke
    family: Sakamoto
  - given: Masato
    family: Tanabe
  - given: Masatomo
    family: Akagawa
  - given: Yusuke
    family: Hayashi
  - given: Manato
    family: Yaguchi
  - given: Masahiro
    family: Suzuki
  - given: Yutaka
    family: Matsuo
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 332-363
  id: sakamoto24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 332
  lastpage: 363
  published: 2024-10-12 00:00:00 +0000
- title: 'Equivariant vs. Invariant Layers: A Comparison of Backbone and Pooling for Point Cloud Classification'
  abstract: 'Learning from set-structured data, such as point clouds, has gained significant attention from the machine learning community. Geometric deep learning provides a blueprint for designing effective set neural networks that preserve the permutation symmetry of set-structured data. Of our interest are permutation invariant networks, which are composed of a permutation equivariant backbone, permutation invariant global pooling, and regression/classification head. While existing literature has focused on improving equivariant backbones, the impact of the pooling layer is often overlooked. In this paper, we examine the interplay between permutation equivariant backbones and permutation invariant global pooling on three benchmark point cloud classification datasets. Our findings reveal that: 1) complex pooling methods, such as transport-based or attention-based poolings, can significantly boost the performance of simple backbones, but the benefits diminish for more complex backbones, 2) even complex backbones can benefit from pooling layers in low data scenarios, 3) surprisingly, the choice of pooling layers can have a more significant impact on the model’s performance than adjusting the width and depth of the backbone, and 4) pairwise combination of pooling layers can significantly improve the performance of a fixed backbone. Our comprehensive study provides insights for practitioners to design better permutation invariant set neural networks. Our code is available at https://github.com/mint-vu/backbone_vs_pooling.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/kothapalli24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/kothapalli24a/kothapalli24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-kothapalli24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Abihith
    family: Kothapalli
  - given: Ashkan
    family: Shahbazi
  - given: Xinran
    family: Liu
  - given: Robert
    family: Sheng
  - given: Soheil
    family: Kolouri
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 364-380
  id: kothapalli24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 364
  lastpage: 380
  published: 2024-10-12 00:00:00 +0000
- title: 'Strongly Isomorphic Neural Optimal Transport Across Incomparable Spaces'
  abstract: 'Optimal Transport (OT) has recently emerged as a powerful framework for learning minimal-displacement maps between distributions. The predominant approach involves a neural parametrization of the Monge formulation of OT, typically assuming the same space for both distributions. However, the setting across “incomparable spaces” (e.g., of different dimensionality), corresponding to the Gromov-Wasserstein distance, remains underexplored, with existing methods often imposing restrictive assumptions on the cost function. In this paper, we present a novel neural formulation of the Gromov-Monge (GM) problem rooted in one of its fundamental properties: invariance to strong isomorphisms. We operationalize this property by decomposing the learnable OT map into two components: (i) an approximate strong isomorphism between the source distribution and an intermediate reference distribution, and (ii) a GM-optimal map between this reference and the target distribution. Our formulation leverages and extends the Monge gap regularizer of \citet{gap_monge} to eliminate the need for complex architectural requirements of other neural OT methods, yielding a simple but practical method that enjoys favorable theoretical guarantees. Our preliminary empirical results show that our framework provides a promising approach to learn OT maps across diverse spaces.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/sotiropoulou24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/sotiropoulou24a/sotiropoulou24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-sotiropoulou24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Athina
    family: Sotiropoulou
  - given: David
    family: Alvarez-Melis
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 381-393
  id: sotiropoulou24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 381
  lastpage: 393
  published: 2024-10-12 00:00:00 +0000
- title: 'Adaptive Sampling for Continuous Group Equivariant Neural Networks'
  abstract: 'Steerable networks, which process data with intrinsic symmetries, often use Fourier-based non-linearities that require sampling from the entire group, leading to a need for discretization in continuous groups. As the number of samples increases, both performance and equivariance improve, yet this also leads to higher computational costs. To address this, we introduce an adaptive sampling approach that dynamically adjusts the sampling process to the symmetries in the data, reducing the number of required group samples and lowering the computational demands. We explore various implementations and their effects on model performance, equivariance, and computational efficiency. Our findings demonstrate improved model performance and a marginal increase in memory efficiency.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/inal24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/inal24a/inal24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-inal24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Berfin
    family: Inal
  - given: Gabriele
    family: Cesa
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 394-419
  id: inal24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 394
  lastpage: 419
  published: 2024-10-12 00:00:00 +0000
- title: 'ICML Topological Deep Learning Challenge 2024: Beyond the Graph Domain'
  abstract: 'This paper describes the 2nd edition of the ICML Topological Deep Learning Challenge that was hosted within the ICML 2024 ELLIS Workshop on Geometry-grounded Representation Learning and Generative Modeling (GRaM). The challenge focused on the problem of representing data in different discrete topological domains in order to bridge the gap between Topological Deep Learning (TDL) and other types of structured datasets (e.g., point clouds, graphs). Specifically, participants were asked to design and implement topological liftings, i.e., mappings between different data structures and topological domains, such as hypergraphs or simplicial/cell/combinatorial complexes. The challenge received 52 submissions satisfying all the requirements. This paper introduces the main scope of the challenge and summarizes the main results and findings.'
  volume: 251
  URL: https://proceedings.mlr.press/v251/bernardez24a.html
  PDF: https://raw.githubusercontent.com/mlresearch/v251/main/assets/bernardez24a/bernardez24a.pdf
  edit: https://github.com/mlresearch//v251/edit/gh-pages/_posts/2024-10-12-bernardez24a.md
  series: 'Proceedings of Machine Learning Research'
  container-title: 'Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM)'
  publisher: 'PMLR'
  author: 
  - given: Guillermo
    family: Bernárdez
  - given: Lev
    family: Telyatnikov
  - given: Marco
    family: Montagna
  - given: Federica
    family: Baccini
  - given: Mathilde
    family: Papillon
  - given: Miquel
    family: Ferriol-Galmés
  - given: Mustafa
    family: Hajij
  - given: Theodore
    family: Papamarkou
  - given: Maria Sofia
    family: Bucarelli
  - given: Olga
    family: Zaghen
  - given: Johan
    family: Mathe
  - given: Audun
    family: Myers
  - given: Scott
    family: Mahan
  - given: Hansen
    family: Lillemark
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Tim
    family: Doster
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Katrina
    family: Agate
  - given: Nesreen K
    family: Ahmed
  - given: Pengfei
    family: Bai
  - given: Michael
    family: Banf
  - given: Claudio
    family: Battiloro
  - given: Maxim
    family: Beketov
  - given: Paul
    family: Bogdan
  - given: Martin
    family: Carrasco
  - given: Andrea
    family: Cavallo
  - given: Yun Young
    family: Choi
  - given: George
    family: Dasoulas
  - given: Matouš
    family: Elphick
  - given: Giordan
    family: Escalona
  - given: Dominik
    family: Filipiak
  - given: Halley
    family: Fritze
  - given: Thomas
    family: Gebhart
  - given: Manel
    family: Gil-Sorribes
  - given: Salvish
    family: Goomanee
  - given: Victor
    family: Guallar
  - given: Liliya
    family: Imasheva
  - given: Andrei
    family: Irimia
  - given: Hongwei
    family: Jin
  - given: Graham
    family: Johnson
  - given: Nikos
    family: Kanakaris
  - given: Boshko
    family: Koloski
  - given: Veljko
    family: Kovač
  - given: Manuel
    family: Lecha
  - given: Minho
    family: Lee
  - given: Pierrick
    family: Leroy
  - given: Theodore
    family: Long
  - given: German
    family: Magai
  - given: Alvaro
    family: Martinez
  - given: Marissa
    family: Masden
  - given: Sebastian
    family: Mežnar
  - given: Bertran
    family: Miquel-Oliver
  - given: Alexis
    family: Molina
  - given: Alexander
    family: Nikitin
  - given: Marco
    family: Nurisso
  - given: Matt
    family: Piekenbrock
  - given: Yu
    family: Qin
  - given: Patryk
    family: Rygiel
  - given: Alessandro
    family: Salatiello
  - given: Max
    family: Schattauer
  - given: Pavel
    family: Snopov
  - given: Julian
    family: Suk
  - given: Valentina
    family: Sánchez
  - given: Mauricio
    family: Tec
  - given: Francesco
    family: Vaccarino
  - given: Jonas
    family: Verhellen
  - given: Frederic
    family: Wantiez
  - given: Alexander
    family: Weers
  - given: Patrik
    family: Zajec
  - given: Blaž
    family: Škrlj
  - given: Nina
    family: Miolane
  editor: 
  - given: Sharvaree
    family: Vadgama
  - given: Erik
    family: Bekkers
  - given: Alison
    family: Pouplin
  - given: Sekou-Oumar
    family: Kaba
  - given: Robin
    family: Walters
  - given: Hannah
    family: Lawrence
  - given: Tegan
    family: Emerson
  - given: Henry
    family: Kvinge
  - given: Jakub
    family: Tomczak
  - given: Stephanie
    family: Jegelka
  page: 420-428
  id: bernardez24a
  issued:
    date-parts: 
      - 2024
      - 10
      - 12
  firstpage: 420
  lastpage: 428
  published: 2024-10-12 00:00:00 +0000
