Pruning Edges and Gradients to Learn Hypergraphs From Larger Sets

David W Zhang, Gertjan J. Burghouts, Cees G. M. Snoek
Proceedings of the First Learning on Graphs Conference, PMLR 198:53:1-53:18, 2022.

Abstract

This paper aims for set-to-hypergraph prediction, where the goal is to infer the set of relations for a given set of entities. This is a common abstraction for applications in particle physics, biological systems and combinatorial optimization. We address two common scaling problems encountered in set-to-hypergraph tasks that limit the size of the input set: the exponentially growing number of hyperedges and the run-time complexity, both leading to higher memory requirements. We make three contributions. First, we propose to predict and supervise the positive edges only, which changes the asymptotic memory scaling from exponential to linear. Second, we introduce a training method that encourages iterative refinement of the predicted hypergraph, which allows us to skip iterations in the backward pass for improved efficiency and constant memory usage. Third, we combine both contributions in a single set-to-hypergraph model that enables us to address problems with larger input set sizes. We provide ablations for our main technical contributions and show that our model outperforms prior state-of-the-art, especially for larger sets.
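To make the two scaling ideas above concrete, here is a minimal PyTorch sketch of a slot-style set-to-hypergraph decoder. It is not the authors' released code: RefineStep, refine, positive_edge_loss, and all sizes and hyperparameters are illustrative assumptions. It shows (a) supervising only K predicted positive hyperedges, so memory is O(K * n) rather than exponential in the set size n, and (b) "gradient pruning", where early refinement iterations run without storing activations and only the last few are backpropagated through.

# Hypothetical sketch; names and hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


class RefineStep(nn.Module):
    # One refinement iteration: K hyperedge slots cross-attend to the n
    # input nodes, then update through a small residual MLP.
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, edges, nodes):
        attended, _ = self.attn(edges, nodes, nodes)
        return edges + self.mlp(attended)


def refine(step, edges, nodes, total_iters=8, grad_iters=2):
    # Skip iterations in the backward pass: early iterations run under
    # no_grad, so no activations are stored for them; only the last
    # `grad_iters` iterations are backpropagated through. Training memory
    # is therefore constant in `total_iters`.
    with torch.no_grad():
        for _ in range(total_iters - grad_iters):
            edges = step(edges, nodes)
    for _ in range(grad_iters):
        edges = step(edges, nodes)
    return edges


def positive_edge_loss(logits, target):
    # Supervise positive hyperedges only. `logits` holds (K, n) incidence
    # logits for K predicted hyperedge slots; `target` holds the (m, n)
    # binary incidence rows of the m ground-truth hyperedges, with m <= K.
    # Memory is O(K * n), never touching the 2^n candidate hyperedges.
    K, n = logits.shape
    m = target.shape[0]
    cost = F.binary_cross_entropy_with_logits(
        logits.unsqueeze(1).expand(K, m, n),
        target.unsqueeze(0).expand(K, m, n),
        reduction='none').mean(-1)
    # Match each ground-truth hyperedge to its best slot (Hungarian matching).
    rows, cols = linear_sum_assignment(cost.detach().numpy())
    return cost[torch.as_tensor(rows), torch.as_tensor(cols)].mean()


if __name__ == '__main__':
    torch.manual_seed(0)
    dim, n, K = 32, 10, 4
    nodes = torch.randn(1, n, dim)             # input set of n entities
    edges = torch.randn(1, K, dim)             # K candidate hyperedge slots
    step = RefineStep(dim)
    edges = refine(step, edges, nodes)
    logits = torch.einsum('bkd,bnd->bkn', edges, nodes)[0]
    target = (torch.rand(3, n) > 0.7).float()  # 3 toy positive hyperedges
    positive_edge_loss(logits, target).backward()

Note the design consequence of the truncated backward pass: because gradients never flow into the early iterations, the refinement step only receives a learning signal for improving the hypergraph from an intermediate state, which is consistent with the abstract's emphasis on training that encourages iterative refinement.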

Cite this Paper

BibTeX
@InProceedings{pmlr-v198-zhang22a,
  title     = {Pruning Edges and Gradients to Learn Hypergraphs From Larger Sets},
  author    = {Zhang, David W and Burghouts, Gertjan J. and Snoek, Cees G. M.},
  booktitle = {Proceedings of the First Learning on Graphs Conference},
  pages     = {53:1--53:18},
  year      = {2022},
  editor    = {Rieck, Bastian and Pascanu, Razvan},
  volume    = {198},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v198/zhang22a/zhang22a.pdf},
  url       = {https://proceedings.mlr.press/v198/zhang22a.html},
  abstract  = {This paper aims for set-to-hypergraph prediction, where the goal is to infer the set of relations for a given set of entities. This is a common abstraction for applications in particle physics, biological systems and combinatorial optimization. We address two common scaling problems encountered in set-to-hypergraph tasks that limit the size of the input set: the exponentially growing number of hyperedges and the run-time complexity, both leading to higher memory requirements. We make three contributions. First, we propose to predict and supervise the positive edges only, which changes the asymptotic memory scaling from exponential to linear. Second, we introduce a training method that encourages iterative refinement of the predicted hypergraph, which allows us to skip iterations in the backward pass for improved efficiency and constant memory usage. Third, we combine both contributions in a single set-to-hypergraph model that enables us to address problems with larger input set sizes. We provide ablations for our main technical contributions and show that our model outperforms prior state-of-the-art, especially for larger sets.}
}
Endnote
%0 Conference Paper
%T Pruning Edges and Gradients to Learn Hypergraphs From Larger Sets
%A David W Zhang
%A Gertjan J. Burghouts
%A Cees G. M. Snoek
%B Proceedings of the First Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Bastian Rieck
%E Razvan Pascanu
%F pmlr-v198-zhang22a
%I PMLR
%P 53:1--53:18
%U https://proceedings.mlr.press/v198/zhang22a.html
%V 198
%X This paper aims for set-to-hypergraph prediction, where the goal is to infer the set of relations for a given set of entities. This is a common abstraction for applications in particle physics, biological systems and combinatorial optimization. We address two common scaling problems encountered in set-to-hypergraph tasks that limit the size of the input set: the exponentially growing number of hyperedges and the run-time complexity, both leading to higher memory requirements. We make three contributions. First, we propose to predict and supervise the positive edges only, which changes the asymptotic memory scaling from exponential to linear. Second, we introduce a training method that encourages iterative refinement of the predicted hypergraph, which allows us to skip iterations in the backward pass for improved efficiency and constant memory usage. Third, we combine both contributions in a single set-to-hypergraph model that enables us to address problems with larger input set sizes. We provide ablations for our main technical contributions and show that our model outperforms prior state-of-the-art, especially for larger sets.
APA
Zhang, D.W., Burghouts, G.J., & Snoek, C.G.M. (2022). Pruning Edges and Gradients to Learn Hypergraphs From Larger Sets. Proceedings of the First Learning on Graphs Conference, in Proceedings of Machine Learning Research 198:53:1-53:18. Available from https://proceedings.mlr.press/v198/zhang22a.html.