XSleepFusion: A dual-stage information bottleneck fusion framework for interpretable multimodal sleep analysis

IF 14.7 · Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Shuaicong Hu, Yanan Wang, Jian Liu, Cuiwei Yang
DOI: 10.1016/j.inffus.2025.103275
Journal: Information Fusion, Volume 123, Article 103275
Published: 2025-05-14 (Journal Article)
Citations: 0

Abstract

Sleep disorders affect hundreds of millions globally, with accurate assessment of sleep apnea (SA) and sleep staging (SS) essential for clinical diagnosis and early intervention. Manual analysis by sleep experts is time-consuming and subject to inter-rater variability. Deep learning (DL) approaches offer automation potential but face fundamental challenges in multi-modal physiological signal integration and interpretability. This paper presents XSleepFusion, a cross-modal fusion framework based on information bottleneck (IB) theory for automated sleep analysis. The framework introduces a dual-stage IB mechanism that systematically processes physiological signals: first eliminating intra-modal redundancy, then optimizing cross-modal feature fusion. An evolutionary attention Transformer network (EAT-Net) backbone extracts temporal features at multiple scales, providing interpretable attention patterns. Experimental validation on eight clinical datasets comprising over 15,000 sleep recordings demonstrates the framework's effectiveness in polysomnogram (PSG)-based SA detection, electrocardiogram (ECG)-based SA detection, and SS. The architecture achieves superior generalization across varying signal qualities and modal combinations, while the dual-stage design enables flexible integration of diverse physiological signals. Through interpretable feature representations and robust cross-modal fusion capabilities, XSleepFusion establishes a reliable and adaptable foundation for clinical sleep monitoring.
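The dual-stage IB mechanism described in the abstract can be illustrated with a toy sketch: stage one applies a variational bottleneck to each modality independently (compressing intra-modal redundancy via a KL penalty), and stage two fuses the compressed codes and bottlenecks them again. This is a minimal NumPy illustration of the general variational-IB pattern, not the authors' XSleepFusion implementation; all dimensions, weights, and the choice of linear Gaussian encoders are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_encoder(x, w_mu, w_logvar):
    """Linear Gaussian encoder: q(z|x) = N(mu(x), diag(exp(logvar(x))))."""
    return x @ w_mu, x @ w_logvar

def kl_to_standard_normal(mu, logvar):
    """KL(q(z|x) || N(0, I)) -- the standard variational upper bound on I(X; Z)."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def reparameterize(mu, logvar, rng):
    """Sample z ~ q(z|x) via the reparameterization trick."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

# Toy data: two "modalities" (e.g. ECG-like and EEG-like feature vectors).
n, d_ecg, d_eeg, d_z = 8, 16, 12, 4
x_ecg = rng.standard_normal((n, d_ecg))
x_eeg = rng.standard_normal((n, d_eeg))

def w(d_in, d_out):
    return 0.1 * rng.standard_normal((d_in, d_out))

# Stage 1: per-modality bottlenecks (intra-modal redundancy reduction).
mu_e, lv_e = gaussian_encoder(x_ecg, w(d_ecg, d_z), w(d_ecg, d_z))
mu_g, lv_g = gaussian_encoder(x_eeg, w(d_eeg, d_z), w(d_eeg, d_z))
z_e = reparameterize(mu_e, lv_e, rng)
z_g = reparameterize(mu_g, lv_g, rng)

# Stage 2: fuse the compressed codes, then apply a second bottleneck.
z_cat = np.concatenate([z_e, z_g], axis=-1)
mu_f, lv_f = gaussian_encoder(z_cat, w(2 * d_z, d_z), w(2 * d_z, d_z))

# Total compression penalty: weighted sum of KL terms from both stages.
# In training, this penalty would be added to a task loss (SA/SS prediction).
beta1, beta2 = 1e-3, 1e-3
compression = (beta1 * (kl_to_standard_normal(mu_e, lv_e)
                        + kl_to_standard_normal(mu_g, lv_g))
               + beta2 * kl_to_standard_normal(mu_f, lv_f))
print(compression.shape)  # one nonnegative penalty per sample
```

The beta weights trade off compression against task accuracy, mirroring the usual IB Lagrangian; the two-stage structure means each modality is denoised before fusion, which is the property the abstract credits for flexible integration of diverse signals.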
Source journal: Information Fusion (Engineering & Technology — Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.