Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for EEG

Xinxu Wei, Kanhao Zhao, Yong Jiao, Hua Xie, Lifang He, Yu Zhang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:66358-66377, 2025.

Abstract

Effectively utilizing extensive unlabeled high-density EEG data to improve performance in scenarios with limited labeled low-density EEG data presents a significant challenge. In this paper, we address this challenge by formulating it as a graph transfer learning and knowledge distillation problem. We propose a Unified Pre-trained Graph Contrastive Masked Autoencoder Distiller, named EEG-DisGCMAE, to bridge the gap between unlabeled and labeled data, as well as between high- and low-density EEG data. Our approach introduces a novel unified graph self-supervised pre-training paradigm, which seamlessly integrates graph contrastive pre-training with graph masked autoencoder pre-training. Furthermore, we propose a graph topology distillation loss function, allowing a lightweight student model trained on low-density data to learn from a teacher model trained on high-density data during pre-training and fine-tuning. This method effectively handles missing electrodes through contrastive distillation. We validate the effectiveness of EEG-DisGCMAE across four classification tasks using two clinical EEG datasets with abundant data.
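The abstract names three ingredients: contrastive pre-training over augmented graph views, masked autoencoder reconstruction of hidden electrode features, and a topology distillation loss that lets a low-density student track a high-density teacher. Below is a minimal, self-contained PyTorch sketch of plausible forms for these three loss terms. It is not the authors' implementation: every function name, dimension, the zero-masking scheme, and the pairwise-similarity notion of "topology" are assumptions made for illustration.

import torch
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.2):
    # Contrastive term: embeddings of the same electrode under two graph
    # augmentations are positives; all other pairs serve as negatives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature        # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)


def masked_reconstruction(encoder, decoder, x, mask_ratio=0.5):
    # Masked-autoencoder term: zero out a random subset of electrode feature
    # rows (zero-masking is an assumption), reconstruct them, and score only
    # the masked rows.
    mask = torch.rand(x.size(0)) < mask_ratio
    x_masked = x.clone()
    x_masked[mask] = 0.0
    x_hat = decoder(encoder(x_masked))
    return F.mse_loss(x_hat[mask], x[mask])


def topology_distillation(z_student, z_teacher):
    # Hypothetical topology distillation: align the student's pairwise
    # electrode-similarity matrix with the frozen teacher's. z_teacher is
    # assumed already pooled to the student's low-density electrode set.
    def sim(z):
        z = F.normalize(z, dim=-1)
        return z @ z.t()                      # (N, N) relational structure
    return F.mse_loss(sim(z_student), sim(z_teacher).detach())


# Toy usage: plain linear layers stand in for graph encoders so the sketch
# runs on its own.
n_elec, feat = 32, 64                         # assumed low-density setup
encoder = torch.nn.Linear(feat, 128)          # stand-in for the student encoder
decoder = torch.nn.Linear(128, feat)
teacher = torch.nn.Linear(feat, 128)          # stand-in for the frozen teacher

x = torch.randn(n_elec, feat)                 # per-electrode EEG features
view1 = x + 0.1 * torch.randn_like(x)         # two cheap augmented "views"
view2 = x + 0.1 * torch.randn_like(x)

loss = (info_nce(encoder(view1), encoder(view2))
        + masked_reconstruction(encoder, decoder, x)
        + topology_distillation(encoder(x), teacher(x)))
loss.backward()

In the paper's setting the encoders would be graph neural networks pre-trained on EEG connectivity graphs, and the three terms would be weighted and scheduled across pre-training and fine-tuning; the unweighted sum here is only for illustration.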

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wei25q,
  title     = {Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for {EEG}},
  author    = {Wei, Xinxu and Zhao, Kanhao and Jiao, Yong and Xie, Hua and He, Lifang and Zhang, Yu},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {66358--66377},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wei25q/wei25q.pdf},
  url       = {https://proceedings.mlr.press/v267/wei25q.html},
  abstract  = {Effectively utilizing extensive unlabeled high-density EEG data to improve performance in scenarios with limited labeled low-density EEG data presents a significant challenge. In this paper, we address this challenge by formulating it as a graph transfer learning and knowledge distillation problem. We propose a Unified Pre-trained Graph Contrastive Masked Autoencoder Distiller, named EEG-DisGCMAE, to bridge the gap between unlabeled and labeled data, as well as between high- and low-density EEG data. Our approach introduces a novel unified graph self-supervised pre-training paradigm, which seamlessly integrates graph contrastive pre-training with graph masked autoencoder pre-training. Furthermore, we propose a graph topology distillation loss function, allowing a lightweight student model trained on low-density data to learn from a teacher model trained on high-density data during pre-training and fine-tuning. This method effectively handles missing electrodes through contrastive distillation. We validate the effectiveness of EEG-DisGCMAE across four classification tasks using two clinical EEG datasets with abundant data.}
}
Endnote
%0 Conference Paper
%T Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for EEG
%A Xinxu Wei
%A Kanhao Zhao
%A Yong Jiao
%A Hua Xie
%A Lifang He
%A Yu Zhang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wei25q
%I PMLR
%P 66358--66377
%U https://proceedings.mlr.press/v267/wei25q.html
%V 267
%X Effectively utilizing extensive unlabeled high-density EEG data to improve performance in scenarios with limited labeled low-density EEG data presents a significant challenge. In this paper, we address this challenge by formulating it as a graph transfer learning and knowledge distillation problem. We propose a Unified Pre-trained Graph Contrastive Masked Autoencoder Distiller, named EEG-DisGCMAE, to bridge the gap between unlabeled and labeled data, as well as between high- and low-density EEG data. Our approach introduces a novel unified graph self-supervised pre-training paradigm, which seamlessly integrates graph contrastive pre-training with graph masked autoencoder pre-training. Furthermore, we propose a graph topology distillation loss function, allowing a lightweight student model trained on low-density data to learn from a teacher model trained on high-density data during pre-training and fine-tuning. This method effectively handles missing electrodes through contrastive distillation. We validate the effectiveness of EEG-DisGCMAE across four classification tasks using two clinical EEG datasets with abundant data.
APA
Wei, X., Zhao, K., Jiao, Y., Xie, H., He, L., & Zhang, Y. (2025). Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for EEG. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:66358-66377. Available from https://proceedings.mlr.press/v267/wei25q.html.
