Spatial-Temporal Graph Learning with Adversarial Contrastive Adaptation

Qianru Zhang, Chao Huang, Lianghao Xia, Zheng Wang, Siu Ming Yiu, Ruihua Han
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:41151-41163, 2023.

Abstract

Spatial-temporal graph learning has emerged as the state-of-the-art solution for modeling structured spatial-temporal data and learning region representations for various urban sensing tasks (e.g., crime forecasting, traffic flow prediction). However, most existing models are sensitive to the quality of the generated region graph because of their rigid graph-structured information aggregation schemes, and the noise and incompleteness that pervade real-life spatial-temporal data make it difficult to generate high-quality region representations. In this paper, we propose a Spatial-Temporal Adversarial Graph contrastive learning model (STAG) that tackles this challenge through adaptive self-supervised graph augmentation. Specifically, we propose a learnable contrastive learning function that enables the automated distillation of important multi-view self-supervised signals for adaptive spatial-temporal graph augmentation. To enhance representation discrimination and robustness, the designed adversarial contrastive learning mechanism empowers STAG to adaptively identify hard samples for better self-supervision. Finally, a cross-view contrastive learning paradigm is introduced to model the inter-dependencies across view-specific region representations and preserve the underlying relation heterogeneity. We verify the superiority of STAG on various spatial-temporal prediction tasks over several benchmark datasets.
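
To make the cross-view contrastive idea concrete, below is a minimal sketch (not the authors' released code) of an InfoNCE-style objective that contrasts region embeddings from two relation-specific views, treating the same region across views as the positive pair and all other regions as negatives. The tensor names, dimensions, and temperature value are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of a cross-view InfoNCE contrastive loss over region
    # embeddings (illustrative assumption, not the STAG implementation).
    import torch
    import torch.nn.functional as F

    def cross_view_infonce(view_a, view_b, temperature=0.2):
        """view_a, view_b: (num_regions, dim) embeddings from two
        relation-specific views (e.g., mobility vs. POI-similarity graphs).
        The matching region across views is the positive; other regions
        in the batch act as negatives."""
        a = F.normalize(view_a, dim=-1)
        b = F.normalize(view_b, dim=-1)
        logits = a @ b.t() / temperature               # (num_regions, num_regions)
        targets = torch.arange(a.size(0), device=a.device)
        # Symmetric loss over both contrast directions (A->B and B->A)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    # Usage with random stand-in embeddings for 64 regions
    regions_a = torch.randn(64, 128)
    regions_b = torch.randn(64, 128)
    loss = cross_view_infonce(regions_a, regions_b)

Whether STAG uses this exact loss form is not stated here; the sketch only illustrates how view-specific region representations can be tied together through a contrastive objective.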

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-zhang23p,
  title     = {Spatial-Temporal Graph Learning with Adversarial Contrastive Adaptation},
  author    = {Zhang, Qianru and Huang, Chao and Xia, Lianghao and Wang, Zheng and Yiu, Siu Ming and Han, Ruihua},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {41151--41163},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/zhang23p/zhang23p.pdf},
  url       = {https://proceedings.mlr.press/v202/zhang23p.html}
}
Endnote
%0 Conference Paper
%T Spatial-Temporal Graph Learning with Adversarial Contrastive Adaptation
%A Qianru Zhang
%A Chao Huang
%A Lianghao Xia
%A Zheng Wang
%A Siu Ming Yiu
%A Ruihua Han
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-zhang23p
%I PMLR
%P 41151--41163
%U https://proceedings.mlr.press/v202/zhang23p.html
%V 202
APA
Zhang, Q., Huang, C., Xia, L., Wang, Z., Yiu, S.M. & Han, R. (2023). Spatial-Temporal Graph Learning with Adversarial Contrastive Adaptation. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:41151-41163. Available from https://proceedings.mlr.press/v202/zhang23p.html.