WatchSleepNet: A Novel Model and Pretraining Approach for Advancing Sleep Staging with Smartwatches

Will Ke Wang, Bill Chen, Jiamu Yang, Hayoung Jeong, Leeor Hershkovich, Shekh Md Mahmudul Islam, Mengde Liu, Ali R Roghanizad, Md Mobashir Hasan Shandhi, Andrew R Spector, Jessilyn Dunn
Proceedings of the sixth Conference on Health, Inference, and Learning, PMLR 287:145-165, 2025.

Abstract

Sleep monitoring is essential for assessing overall health and managing sleep disorders, yet clinical adoption of consumer wearables remains limited by inconsistent performance and a scarcity of open-source datasets and transparent codebases. In this study, we introduce WatchSleepNet, a novel, open-source three-stage sleep staging algorithm. The model uses a sequence-to-sequence architecture integrating Residual Networks (ResNet), Temporal Convolutional Networks (TCN), and Long Short-Term Memory (LSTM) networks with self-attention to effectively capture both the spatial and temporal dependencies crucial for sleep staging. To address the limited availability of high-quality wearable photoplethysmography (PPG) datasets, WatchSleepNet leverages inter-beat interval (IBI) signals as a shared representation across the polysomnography (PSG) and PPG modalities. By pretraining on large PSG datasets and fine-tuning on wrist-worn PPG signals, the model achieved a REM F1 score of 0.642 ± 0.072 and a Cohen’s Kappa of 0.684 ± 0.051, surpassing previous state-of-the-art methods. To promote transparency and further research, we publicly release our model and codebase, advancing reproducibility and accessibility in wearable sleep research and enabling the development of more robust, clinically viable sleep monitoring solutions.
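The shared IBI representation described above can be illustrated with a minimal sketch (the helper name and beat timestamps below are hypothetical, not from the paper): once beats are detected, whether R-peaks in a PSG ECG channel or pulse peaks in wrist PPG, both modalities reduce to the same sequence of inter-beat intervals.

```python
def beats_to_ibi(beat_times_s):
    """Convert beat timestamps (in seconds) to inter-beat intervals (in ms)."""
    return [round((b - a) * 1000.0, 1)
            for a, b in zip(beat_times_s, beat_times_s[1:])]

# Hypothetical beat timestamps for the same 2.5 s of sleep:
ecg_beats = [0.00, 0.82, 1.61, 2.45]  # R-peak times from PSG ECG
ppg_beats = [0.05, 0.87, 1.66, 2.50]  # pulse-peak times from wrist PPG
                                      # (shifted by pulse-arrival delay)

print(beats_to_ibi(ecg_beats))  # [820.0, 790.0, 840.0]
print(beats_to_ibi(ppg_beats))  # [820.0, 790.0, 840.0] -- identical IBIs
```

Because the model consumes IBI sequences rather than raw waveforms, a network pretrained on PSG-derived IBIs can plausibly be fine-tuned directly on PPG-derived IBIs, which is the transfer setup the abstract describes.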

Cite this Paper


BibTeX
@InProceedings{pmlr-v287-wang25a,
  title     = {WatchSleepNet: A Novel Model and Pretraining Approach for Advancing Sleep Staging with Smartwatches},
  author    = {Wang, Will Ke and Chen, Bill and Yang, Jiamu and Jeong, Hayoung and Hershkovich, Leeor and Islam, Shekh Md Mahmudul and Liu, Mengde and Roghanizad, Ali R and Shandhi, Md Mobashir Hasan and Spector, Andrew R and Dunn, Jessilyn},
  booktitle = {Proceedings of the sixth Conference on Health, Inference, and Learning},
  pages     = {145--165},
  year      = {2025},
  editor    = {Xu, Xuhai Orson and Choi, Edward and Singhal, Pankhuri and Gerych, Walter and Tang, Shengpu and Agrawal, Monica and Subbaswamy, Adarsh and Sizikova, Elena and Dunn, Jessilyn and Daneshjou, Roxana and Sarker, Tasmie and McDermott, Matthew and Chen, Irene},
  volume    = {287},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v287/main/assets/wang25a/wang25a.pdf},
  url       = {https://proceedings.mlr.press/v287/wang25a.html},
  abstract  = {Sleep monitoring is essential for assessing overall health and managing sleep disorders, yet clinical adoption of consumer wearables remains limited by inconsistent performance and a scarcity of open-source datasets and transparent codebases. In this study, we introduce WatchSleepNet, a novel, open-source three-stage sleep staging algorithm. The model uses a sequence-to-sequence architecture integrating Residual Networks (ResNet), Temporal Convolutional Networks (TCN), and Long Short-Term Memory (LSTM) networks with self-attention to effectively capture both the spatial and temporal dependencies crucial for sleep staging. To address the limited availability of high-quality wearable photoplethysmography (PPG) datasets, WatchSleepNet leverages inter-beat interval (IBI) signals as a shared representation across the polysomnography (PSG) and PPG modalities. By pretraining on large PSG datasets and fine-tuning on wrist-worn PPG signals, the model achieved a REM F1 score of 0.642 +/- 0.072 and a Cohen's Kappa of 0.684 +/- 0.051, surpassing previous state-of-the-art methods. To promote transparency and further research, we publicly release our model and codebase, advancing reproducibility and accessibility in wearable sleep research and enabling the development of more robust, clinically viable sleep monitoring solutions.}
}
Endnote
%0 Conference Paper
%T WatchSleepNet: A Novel Model and Pretraining Approach for Advancing Sleep Staging with Smartwatches
%A Will Ke Wang
%A Bill Chen
%A Jiamu Yang
%A Hayoung Jeong
%A Leeor Hershkovich
%A Shekh Md Mahmudul Islam
%A Mengde Liu
%A Ali R Roghanizad
%A Md Mobashir Hasan Shandhi
%A Andrew R Spector
%A Jessilyn Dunn
%B Proceedings of the sixth Conference on Health, Inference, and Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Xuhai Orson Xu
%E Edward Choi
%E Pankhuri Singhal
%E Walter Gerych
%E Shengpu Tang
%E Monica Agrawal
%E Adarsh Subbaswamy
%E Elena Sizikova
%E Jessilyn Dunn
%E Roxana Daneshjou
%E Tasmie Sarker
%E Matthew McDermott
%E Irene Chen
%F pmlr-v287-wang25a
%I PMLR
%P 145--165
%U https://proceedings.mlr.press/v287/wang25a.html
%V 287
%X Sleep monitoring is essential for assessing overall health and managing sleep disorders, yet clinical adoption of consumer wearables remains limited by inconsistent performance and a scarcity of open-source datasets and transparent codebases. In this study, we introduce WatchSleepNet, a novel, open-source three-stage sleep staging algorithm. The model uses a sequence-to-sequence architecture integrating Residual Networks (ResNet), Temporal Convolutional Networks (TCN), and Long Short-Term Memory (LSTM) networks with self-attention to effectively capture both the spatial and temporal dependencies crucial for sleep staging. To address the limited availability of high-quality wearable photoplethysmography (PPG) datasets, WatchSleepNet leverages inter-beat interval (IBI) signals as a shared representation across the polysomnography (PSG) and PPG modalities. By pretraining on large PSG datasets and fine-tuning on wrist-worn PPG signals, the model achieved a REM F1 score of 0.642 +/- 0.072 and a Cohen's Kappa of 0.684 +/- 0.051, surpassing previous state-of-the-art methods. To promote transparency and further research, we publicly release our model and codebase, advancing reproducibility and accessibility in wearable sleep research and enabling the development of more robust, clinically viable sleep monitoring solutions.
APA
Wang, W.K., Chen, B., Yang, J., Jeong, H., Hershkovich, L., Islam, S.M.M., Liu, M., Roghanizad, A.R., Shandhi, M.M.H., Spector, A.R. & Dunn, J. (2025). WatchSleepNet: A Novel Model and Pretraining Approach for Advancing Sleep Staging with Smartwatches. Proceedings of the sixth Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 287:145-165. Available from https://proceedings.mlr.press/v287/wang25a.html.
