Title: Multi-View Self-Supervised Learning Enhances Automatic Sleep Staging from EEG Signals
Authors: Tianyou Yu, Xinxin Hu, Yanbin He, Wei Wu, Zhenghui Gu, Zhuliang Yu, Yuanqing Li, Fei Wang, Jun Xiao
Journal: IEEE Transactions on Biomedical Engineering
DOI: 10.1109/TBME.2025.3561228
Publication date: 2025-04-15
Citations: 0
Abstract
Deep learning-based methods for automatic sleep staging offer an efficient and objective alternative to costly manual scoring. However, their reliance on extensive labeled datasets and the challenge of generalizing to new subjects and datasets limit their widespread adoption. Self-supervised learning (SSL) has emerged as a promising solution to these issues by learning transferable representations from unlabeled data. This study highlights the effectiveness of SSL in automated sleep staging, utilizing a customized SSL approach to train a multi-view sleep staging model. This model includes a temporal-view feature encoder for raw EEG signals and a spectral-view feature encoder for time-frequency features. During pretraining, we incorporate a cross-view contrastive loss in addition to a contrastive loss for each view to learn complementary features and ensure consistency between views, enhancing the transferability and robustness of the learned features. A dynamic weighting algorithm balances the learning speed of the different loss components. Subsequently, these feature encoders, combined with a sequence encoder and a linear classifier, enable sleep staging after fine-tuning with labeled data. Evaluation on three publicly available datasets demonstrates that fine-tuning the entire SSL-pretrained model achieves accuracy competitive with state-of-the-art methods: 86.4%, 83.8%, and 85.5% on the SleepEDF-20, SleepEDF-78, and MASS datasets, respectively. Notably, our framework achieves near-equivalent performance with only 5% of the labeled data compared to fully supervised training, showcasing SSL's potential to enhance the efficiency of automated sleep staging.
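The pretraining objective described above (a contrastive loss per view plus a cross-view consistency term, with weighted components) could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the NT-Xent formulation, the temperature value, and the fixed weights (standing in for the paper's dynamic weighting algorithm) are all assumptions.

```python
import numpy as np

def nt_xent(za, zb, temperature=0.5):
    """NT-Xent/InfoNCE-style loss between two batches of embeddings;
    row i of za and row i of zb are assumed to be a positive pair."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)  # L2-normalize rows
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / temperature                     # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                   # positives on diagonal

def multi_view_loss(t1, t2, s1, s2, weights=(1.0, 1.0, 1.0)):
    """Combine a temporal-view loss (two augmented temporal embeddings),
    a spectral-view loss, and a cross-view consistency loss."""
    w_t, w_s, w_x = weights
    loss_t = nt_xent(t1, t2)   # contrastive loss within the temporal view
    loss_s = nt_xent(s1, s2)   # contrastive loss within the spectral view
    loss_x = nt_xent(t1, s1)   # cross-view consistency between views
    return w_t * loss_t + w_s * loss_s + w_x * loss_x

# Toy batch of 8 epochs with 16-dim embeddings per view
rng = np.random.default_rng(0)
t1, t2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
s1, s2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
total = multi_view_loss(t1, t2, s1, s2)
```

In the paper, the fixed `weights` tuple would instead be adjusted by the dynamic weighting algorithm so that no single loss component dominates the learning speed of the others.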
Journal Introduction:
IEEE Transactions on Biomedical Engineering publishes basic and applied papers on biomedical engineering, ranging from the development of methods and techniques with biomedical applications to experimental and clinical investigations with engineering contributions.