Multi-View Self-Supervised Learning Enhances Automatic Sleep Staging from EEG Signals.

Impact Factor: 4.4 · CAS Tier 2 (Medicine) · JCR Q2 (Engineering, Biomedical)
Tianyou Yu, Xinxin Hu, Yanbin He, Wei Wu, Zhenghui Gu, Zhuliang Yu, Yuanqing Li, Fei Wang, Jun Xiao
{"title":"Multi-View Self-Supervised Learning Enhances Automatic Sleep Staging from EEG Signals.","authors":"Tianyou Yu, Xinxin Hu, Yanbin He, Wei Wu, Zhenghui Gu, Zhuliang Yu, Yuanqing Li, Fei Wang, Jun Xiao","doi":"10.1109/TBME.2025.3561228","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning-based methods for automatic sleep staging offer an efficient and objective alternative to costly manual scoring. However, their reliance on extensive labeled datasets and the challenge of generalization to new subjects and datasets limit their widespread adoption. Self-supervised learning (SSL) has emerged as a promising solution to address these issues by learning transferable representations from unlabeled data. This study highlights the effectiveness of SSL in automated sleep staging, utilizing a customized SSL approach to train a multi-view sleep staging model. This model includes a temporal view feature encoder for raw EEG signals and a spectral view feature encoder for time-frequency features. During pretraining, we incorporate a cross-view contrastive loss in addition to a contrastive loss for each view to learn complementary features and ensure consistency between views, enhancing the transferability and robustness of learned features. A dynamic weighting algorithm balances the learning speed of different loss components. Subsequently, these feature encoders, combined with a sequence encoder and a linear classifier, enable sleep staging after finetuning with labeled data. Evaluation on three publicly available datasets demonstrates that finetuning the entire SSL-pretrained model achieves competitive accuracy with state-of-the-art methods-86.4%, 83.8%, and 85.5% on SleepEDF-20, SleepEDF-78, and MASS datasets, respectively. Notably, our framework achieves near-equivalent performance with only 5% of the labeled data compared to full-label supervised training, showcasing SSL's potential to enhance automated sleep staging efficiency.</p>","PeriodicalId":13245,"journal":{"name":"IEEE Transactions on Biomedical Engineering","volume":"PP ","pages":""},"PeriodicalIF":4.4000,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Biomedical Engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/TBME.2025.3561228","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
引用次数: 0

Abstract

Deep learning-based methods for automatic sleep staging offer an efficient and objective alternative to costly manual scoring. However, their reliance on extensive labeled datasets and the challenge of generalizing to new subjects and datasets limit their widespread adoption. Self-supervised learning (SSL) has emerged as a promising way to address these issues by learning transferable representations from unlabeled data. This study demonstrates the effectiveness of SSL in automated sleep staging, using a customized SSL approach to train a multi-view sleep staging model. The model comprises a temporal-view feature encoder for raw EEG signals and a spectral-view feature encoder for time-frequency features. During pretraining, we add a cross-view contrastive loss to the per-view contrastive losses to learn complementary features and enforce consistency between views, improving the transferability and robustness of the learned features. A dynamic weighting algorithm balances the learning speed of the different loss components. After pretraining, these feature encoders, combined with a sequence encoder and a linear classifier, perform sleep staging once fine-tuned with labeled data. Evaluation on three publicly available datasets shows that fine-tuning the entire SSL-pretrained model achieves accuracy competitive with state-of-the-art methods: 86.4%, 83.8%, and 85.5% on the SleepEDF-20, SleepEDF-78, and MASS datasets, respectively. Notably, with only 5% of the labeled data, our framework achieves performance nearly equivalent to fully supervised training, showcasing SSL's potential to improve the efficiency of automated sleep staging.
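To make the described objective concrete, below is a minimal PyTorch sketch of the kind of pretraining the abstract outlines: an NT-Xent contrastive loss per view, a cross-view contrastive loss tying the temporal and spectral embeddings together, and a dynamic loss-weighting rule. Everything here is an illustrative assumption rather than the authors' code: the encoder interfaces, the noise-based augmentation, the NT-Xent formulation, and the Dynamic Weight Averaging rule (one common way to balance loss learning speeds; the paper's exact algorithm may differ) are all placeholders.

```python
# Minimal sketch of multi-view SSL pretraining (illustrative only; encoder
# interfaces, augmentation, and weighting rule are assumptions, not the
# authors' implementation).
import torch
import torch.nn.functional as F

def augment(x: torch.Tensor) -> torch.Tensor:
    """Placeholder stochastic augmentation: additive Gaussian noise."""
    return x + 0.01 * torch.randn_like(x)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent contrastive loss: z1[i] and z2[i] embed two views of the same
    EEG epoch (positives); all other pairs in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                           # (2n, d)
    sim = (z @ z.t()) / temperature                          # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))               # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def dwa_weights(prev_losses, curr_losses, temp: float = 2.0) -> torch.Tensor:
    """Dynamic Weight Averaging (Liu et al., 2019): losses that decrease more
    slowly get larger weights. One plausible reading of the paper's 'dynamic
    weighting algorithm'; the actual rule may differ."""
    rates = torch.tensor(curr_losses) / torch.tensor(prev_losses)
    return len(curr_losses) * torch.softmax(rates / temp, dim=0)

def pretrain_step(temporal_encoder, spectral_encoder, x_raw, x_tf, weights):
    """One pretraining step combining the three losses named in the abstract.
    Assumes both encoders project to the same embedding dimension."""
    # Two stochastic views per modality.
    zt1, zt2 = temporal_encoder(augment(x_raw)), temporal_encoder(augment(x_raw))
    zs1, zs2 = spectral_encoder(augment(x_tf)), spectral_encoder(augment(x_tf))

    loss_t = nt_xent(zt1, zt2)   # temporal-view contrastive loss
    loss_s = nt_xent(zs1, zs2)   # spectral-view contrastive loss
    loss_x = nt_xent(zt1, zs1)   # cross-view consistency loss

    total = weights[0] * loss_t + weights[1] * loss_s + weights[2] * loss_x
    return total, (loss_t.item(), loss_s.item(), loss_x.item())
```

Per the abstract, after pretraining the two feature encoders are combined with a sequence encoder (to model context across consecutive epochs) and a linear classifier, and the whole model is fine-tuned on labeled data.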

Source Journal
IEEE Transactions on Biomedical Engineering (Engineering, Biomedical)
CiteScore: 9.40 · Self-citation rate: 4.30% · Annual publications: 880 · Review time: 2.5 months
Journal description: IEEE Transactions on Biomedical Engineering contains basic and applied papers dealing with biomedical engineering. Papers range from engineering developments in methods and techniques with biomedical applications to experimental and clinical investigations with engineering contributions.