Multilevel Inter-modal and Intra-modal Transformer network with domain adversarial learning for multimodal sleep staging.

IF 3.1 · CAS Zone 3 (Engineering & Technology) · JCR Q2 NEUROSCIENCES
Cognitive Neurodynamics Pub Date : 2025-12-01 Epub Date: 2025-05-26 DOI:10.1007/s11571-025-10262-w
Yang-Yang He, Jian-Wei Liu
Volume 19, Article 80 · Epub 2025-05-26 · Citations: 0
PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106285/pdf/

Abstract

Sleep staging is a fundamental task in the diagnosis of sleep disorders. Advances in biosensing and deep learning have made it possible to decode the sleep process automatically from electroencephalogram (EEG) signals. However, most sleep staging methods do not jointly exploit multimodal sleep signals, such as EEG and electrooculogram (EOG), because doing so has yielded limited performance gains. To address this, we design a Multilevel Inter-modal and Intra-modal Transformer network with domain adversarial learning for multimodal sleep staging. We introduce a multilevel Transformer structure to fully capture the temporal dependencies within the sleep signals of each modality and the interdependencies among different modalities, and we employ multi-scale CNNs to learn time-domain and frequency-domain features separately. Our work advances the application of Transformer models to sleep staging. Moreover, because of individual differences among subjects, a model trained on one group's data often performs poorly when applied to another group, a failure known as the domain generalization problem. Domain adaptation methods are commonly used, but fine-tuning on each new target domain is cumbersome and impractical. To address these issues without using target-domain information, we introduce domain adversarial learning, which helps the model learn domain-invariant features that generalize across domains. We validated our model on two commonly used datasets, where it significantly outperformed the baseline models. Our model efficiently extracts intra-modal and inter-modal dependencies from multimodal sleep data, making it suitable for scenarios requiring high accuracy.
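The abstract describes two dependency levels but no implementation details. As a rough illustration only, the sketch below applies self-attention within each modality's epoch sequence (intra-modal) and then across modalities at each epoch (inter-modal). All shapes, the single-head identity-projection attention, and the two-modality setup are assumptions for the sketch, not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head scaled dot-product self-attention over axis -2.
    x: (..., seq_len, d). Identity Q/K/V projections for brevity."""
    d = x.shape[-1]
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

# Toy input: 2 modalities (e.g. EEG, EOG), 30 sleep epochs, 64-dim features.
n_mod, n_epoch, d = 2, 30, 64
rng = np.random.default_rng(0)
feats = rng.standard_normal((n_mod, n_epoch, d))

# Intra-modal level: attend over the epoch sequence within each modality.
intra = self_attention(feats)                      # (2, 30, 64)

# Inter-modal level: at each epoch, attend across the modality axis.
inter = self_attention(np.swapaxes(intra, 0, 1))   # (30, 2, 64)
fused = np.swapaxes(inter, 0, 1)                   # back to (2, 30, 64)
```

The two calls reuse the same attention primitive; only the axis being attended over changes, which is the essence of separating intra-modal from inter-modal dependencies.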
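Domain adversarial learning is typically built around a gradient reversal layer: identity in the forward pass, a sign-flipped (and scaled) gradient in the backward pass, so the feature extractor is pushed to produce features the domain classifier cannot separate. The minimal NumPy sketch below shows that mechanism with a hand-written backward pass; the class name, the lambda value, and the linear domain classifier are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer: identity forward, -lambda * grad backward."""
    def __init__(self, lam=1.0):
        self.lam = lam
    def forward(self, x):
        return x                       # identity in the forward pass
    def backward(self, grad_out):
        return -self.lam * grad_out    # flipped, scaled gradient

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))    # batch of 4 feature vectors
w = rng.standard_normal((8, 1))        # toy linear domain classifier

grl = GradReverse(lam=0.5)
logits = grl.forward(feats) @ w        # forward: GRL changes nothing
grad_logits = np.ones((4, 1))          # pretend upstream gradient
grad_feats = grl.backward(grad_logits @ w.T)  # reversed gradient reaching
                                              # the feature extractor
```

Because the reversed gradient flows only into the shared feature extractor, the domain classifier still learns to discriminate domains normally while the extractor learns domain-invariant features.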

Source journal: Cognitive Neurodynamics (Medicine – Neuroscience)
CiteScore: 6.90 · Self-citation rate: 18.90% · Articles per year: 140 · Review time: 12 months
Journal description: Cognitive Neurodynamics provides a unique forum of communication and cooperation for scientists and engineers working in the field of cognitive neurodynamics, intelligent science and applications, bridging the gap between theory and application, without any preference for pure theoretical, experimental or computational models. The emphasis is on publishing original models of cognitive neurodynamics, novel computational theories and experimental results. In particular, intelligent science inspired by cognitive neuroscience and neurodynamics is also very welcome. The scope of Cognitive Neurodynamics covers cognitive neuroscience, neural computation based on dynamics, computer science, intelligent science as well as their interdisciplinary applications in the natural and engineering sciences. Papers that are appropriate for non-specialist readers are encouraged.
1. There is no page limit for manuscripts submitted to Cognitive Neurodynamics. Research papers should clearly represent an important advance of especially broad interest to researchers and technologists in neuroscience, biophysics, BCI, neural computation and intelligent robotics.
2. Cognitive Neurodynamics also welcomes brief communications: short papers reporting results that are of genuinely broad interest but that for one reason or another do not make a sufficiently complete story to justify a full article publication. Brief Communications should consist of approximately four manuscript pages.
3. Cognitive Neurodynamics publishes review articles in which a specific field is reviewed through an exhaustive literature survey. There are no restrictions on the number of pages. Review articles are usually invited, but submitted reviews will also be considered.