Featured Graph Coarsening with Similarity Guarantees

Manoj Kumar, Anurag Sharma, Shashwat Saxena, Sandeep Kumar
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:17953-17975, 2023.

Abstract

Graph coarsening is a dimensionality reduction technique that aims to learn a smaller, tractable graph while preserving the properties of the original input graph. However, many real-world graphs also have features or contexts associated with each node. Existing graph coarsening methods do not consider the node features and rely solely on a graph matrix (e.g., the adjacency or Laplacian matrix) to coarsen graphs, while recent deep learning-based coarsening methods that do use both the node features and the graph matrix are designed for specific tasks. In this paper, we introduce a novel optimization-based framework for graph coarsening that takes both the graph matrix and the node features as input and jointly learns the coarsened graph matrix and the coarsened feature matrix while ensuring desired properties. To the best of our knowledge, this is the first work that guarantees that the learned coarsened graph is $\epsilon\in[0,1)$ similar to the original graph. Extensive experiments with both real and synthetic benchmark datasets elucidate the proposed framework’s efficacy and applicability for numerous graph-based applications, including graph clustering, node classification, stochastic block model identification, and graph summarization.
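
For intuition, the sketch below shows what a featured coarsening produces when the node-to-supernode mapping is fixed in advance; the paper's framework instead learns this mapping jointly with the coarsened graph and feature matrices. The toy graph, the fixed assignment, the helper name coarsen, and the eigenvalue-based similarity proxy are assumptions for illustration only and are not the paper's method or its $\epsilon$-similarity measure.

# Illustrative sketch only (not the paper's learned coarsening): given a *fixed*
# node-to-supernode assignment, aggregate the Laplacian and average the node
# features per supernode.
import numpy as np

def coarsen(L, X, assignment, n_super):
    """L: (N, N) graph Laplacian, X: (N, d) node features,
    assignment: length-N array mapping each node to a supernode id."""
    N = L.shape[0]
    P = np.zeros((N, n_super))            # binary membership matrix
    P[np.arange(N), assignment] = 1.0
    sizes = P.sum(axis=0)                 # number of nodes per supernode
    L_c = P.T @ L @ P                     # coarsened Laplacian (rows still sum to zero)
    X_c = (P.T @ X) / sizes[:, None]      # supernode features = mean of member features
    return L_c, X_c

# Toy example: path graph on 4 nodes with 2-dimensional features (assumed data).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
X = np.array([[1.0, 0.0], [1.0, 0.2], [0.0, 1.0], [0.2, 1.0]])

L_c, X_c = coarsen(L, X, assignment=np.array([0, 0, 1, 1]), n_super=2)

# Crude similarity proxy (not the paper's epsilon): relative change of the
# smallest nonzero Laplacian eigenvalue (algebraic connectivity) after coarsening.
lam = np.sort(np.linalg.eigvalsh(L))[1]
lam_c = np.sort(np.linalg.eigvalsh(L_c))[1]
print("coarsened Laplacian:\n", L_c)
print("coarsened features:\n", X_c)
print("relative change in algebraic connectivity:", abs(lam - lam_c) / lam)

In this sketch the coarsened Laplacian and features follow mechanically from the assignment; the paper's contribution is to learn the coarsening itself, jointly from the graph matrix and the node features, with the $\epsilon$-similarity guarantee stated in the abstract.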

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-kumar23a,
  title     = {Featured Graph Coarsening with Similarity Guarantees},
  author    = {Kumar, Manoj and Sharma, Anurag and Saxena, Shashwat and Kumar, Sandeep},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {17953--17975},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/kumar23a/kumar23a.pdf},
  url       = {https://proceedings.mlr.press/v202/kumar23a.html},
  abstract  = {Graph coarsening is a dimensionality reduction technique that aims to learn a smaller-tractable graph while preserving the properties of the original input graph. However, many real-world graphs also have features or contexts associated with each node. The existing graph coarsening methods do not consider the node features and rely solely on a graph matrix(e.g., adjacency and Laplacian) to coarsen graphs. However, some recent deep learning-based graph coarsening methods are designed for specific tasks considering both node features and graph matrix. In this paper, we introduce a novel optimization-based framework for graph coarsening that takes both the graph matrix and the node features as the input and jointly learns the coarsened graph matrix and the coarsened feature matrix while ensuring desired properties. To the best of our knowledge, this is the first work that guarantees that the learned coarsened graph is $\epsilon\in[0,1)$ similar to the original graph. Extensive experiments with both real and synthetic benchmark datasets elucidate the proposed framework’s efficacy and applicability for numerous graph-based applications, including graph clustering, node classification, stochastic block model identification, and graph summarization.}
}
Endnote
%0 Conference Paper
%T Featured Graph Coarsening with Similarity Guarantees
%A Manoj Kumar
%A Anurag Sharma
%A Shashwat Saxena
%A Sandeep Kumar
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-kumar23a
%I PMLR
%P 17953--17975
%U https://proceedings.mlr.press/v202/kumar23a.html
%V 202
%X Graph coarsening is a dimensionality reduction technique that aims to learn a smaller-tractable graph while preserving the properties of the original input graph. However, many real-world graphs also have features or contexts associated with each node. The existing graph coarsening methods do not consider the node features and rely solely on a graph matrix(e.g., adjacency and Laplacian) to coarsen graphs. However, some recent deep learning-based graph coarsening methods are designed for specific tasks considering both node features and graph matrix. In this paper, we introduce a novel optimization-based framework for graph coarsening that takes both the graph matrix and the node features as the input and jointly learns the coarsened graph matrix and the coarsened feature matrix while ensuring desired properties. To the best of our knowledge, this is the first work that guarantees that the learned coarsened graph is $\epsilon\in[0,1)$ similar to the original graph. Extensive experiments with both real and synthetic benchmark datasets elucidate the proposed framework’s efficacy and applicability for numerous graph-based applications, including graph clustering, node classification, stochastic block model identification, and graph summarization.
APA
Kumar, M., Sharma, A., Saxena, S. & Kumar, S. (2023). Featured Graph Coarsening with Similarity Guarantees. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:17953-17975. Available from https://proceedings.mlr.press/v202/kumar23a.html.
