Implications of sparsity and high triangle density for graph representation learning

Hannah Sansford, Alexander Modell, Nick Whiteley, Patrick Rubin-Delanchy
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:5449-5473, 2023.

Abstract

Recent work has shown that sparse graphs containing many triangles cannot be reproduced using a finite-dimensional representation of the nodes, in which link probabilities are inner products. Here, we show that such graphs can be reproduced using an infinite-dimensional inner product model, where the node representations lie on a low-dimensional manifold. Recovering a global representation of the manifold is impossible in a sparse regime. However, we can zoom in on local neighbourhoods, where a lower-dimensional representation is possible. As our constructions allow the points to be uniformly distributed on the manifold, we find evidence against the common perception that triangles imply community structure.

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-sansford23a,
  title     = {Implications of sparsity and high triangle density for graph representation learning},
  author    = {Sansford, Hannah and Modell, Alexander and Whiteley, Nick and Rubin-Delanchy, Patrick},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {5449--5473},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/sansford23a/sansford23a.pdf},
  url       = {https://proceedings.mlr.press/v206/sansford23a.html},
  abstract  = {Recent work has shown that sparse graphs containing many triangles cannot be reproduced using a finite-dimensional representation of the nodes, in which link probabilities are inner products. Here, we show that such graphs can be reproduced using an infinite-dimensional inner product model, where the node representations lie on a low-dimensional manifold. Recovering a global representation of the manifold is impossible in a sparse regime. However, we can zoom in on local neighbourhoods, where a lower-dimensional representation is possible. As our constructions allow the points to be uniformly distributed on the manifold, we find evidence against the common perception that triangles imply community structure.}
}
Endnote
%0 Conference Paper
%T Implications of sparsity and high triangle density for graph representation learning
%A Hannah Sansford
%A Alexander Modell
%A Nick Whiteley
%A Patrick Rubin-Delanchy
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-sansford23a
%I PMLR
%P 5449--5473
%U https://proceedings.mlr.press/v206/sansford23a.html
%V 206
%X Recent work has shown that sparse graphs containing many triangles cannot be reproduced using a finite-dimensional representation of the nodes, in which link probabilities are inner products. Here, we show that such graphs can be reproduced using an infinite-dimensional inner product model, where the node representations lie on a low-dimensional manifold. Recovering a global representation of the manifold is impossible in a sparse regime. However, we can zoom in on local neighbourhoods, where a lower-dimensional representation is possible. As our constructions allow the points to be uniformly distributed on the manifold, we find evidence against the common perception that triangles imply community structure.
APA
Sansford, H., Modell, A., Whiteley, N. &amp; Rubin-Delanchy, P. (2023). Implications of sparsity and high triangle density for graph representation learning. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:5449-5473. Available from https://proceedings.mlr.press/v206/sansford23a.html.