Deep Attention Networks With Multi-Temporal Information Fusion for Sleep Apnea Detection

IF 2.7 Q3 ENGINEERING, BIOMEDICAL
Meng Jiao;Changyue Song;Xiaochen Xian;Shihao Yang;Feng Liu
{"title":"多时相信息融合的深度注意网络用于睡眠呼吸暂停检测","authors":"Meng Jiao;Changyue Song;Xiaochen Xian;Shihao Yang;Feng Liu","doi":"10.1109/OJEMB.2024.3405666","DOIUrl":null,"url":null,"abstract":"Sleep Apnea (SA) is a prevalent sleep disorder with multifaceted etiologies that can have severe consequences for patients. Diagnosing SA traditionally relies on the in-laboratory polysomnogram (PSG), which records various human physiological activities overnight. SA diagnosis involves manual scoring by qualified physicians. Traditional machine learning methods for SA detection depend on hand-crafted features, making feature selection pivotal for downstream classification tasks. In recent years, deep learning has gained popularity in SA detection due to its capability for automatic feature extraction and superior classification accuracy. This study introduces a Deep Attention Network with Multi-Temporal Information Fusion (DAN-MTIF) for SA detection using single-lead electrocardiogram (ECG) signals. This framework utilizes three 1D convolutional neural network (CNN) blocks to extract features from R-R intervals and R-peak amplitudes using segments of varying lengths. Recognizing that features derived from different temporal scales vary in their contribution to classification, we integrate a multi-head attention module with a self-attention mechanism to learn the weights for each feature vector. Comprehensive experiments and comparisons between two paradigms of classical machine learning approaches and deep learning approaches are conducted. 
Our experiment results demonstrate that (1) compared with benchmark methods, the proposed DAN-MTIF exhibits excellent performance with 0.9106 accuracy, 0.9396 precision, 0.8470 sensitivity, 0.9588 specificity, and 0.8909 \n<inline-formula><tex-math>$F_{1}$</tex-math></inline-formula>\n score at per-segment level; (2) DAN-MTIF can effectively extract features with a higher degree of discrimination from ECG segments of multiple timescales than those with a single time scale, ensuring a better SA detection performance; (3) the overall performance of deep learning methods is better than the classical machine learning algorithms, highlighting the superior performance of deep learning approaches for SA detection.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"5 ","pages":"792-802"},"PeriodicalIF":2.7000,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10539178","citationCount":"0","resultStr":"{\"title\":\"Deep Attention Networks With Multi-Temporal Information Fusion for Sleep Apnea Detection\",\"authors\":\"Meng Jiao;Changyue Song;Xiaochen Xian;Shihao Yang;Feng Liu\",\"doi\":\"10.1109/OJEMB.2024.3405666\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sleep Apnea (SA) is a prevalent sleep disorder with multifaceted etiologies that can have severe consequences for patients. Diagnosing SA traditionally relies on the in-laboratory polysomnogram (PSG), which records various human physiological activities overnight. SA diagnosis involves manual scoring by qualified physicians. Traditional machine learning methods for SA detection depend on hand-crafted features, making feature selection pivotal for downstream classification tasks. 
In recent years, deep learning has gained popularity in SA detection due to its capability for automatic feature extraction and superior classification accuracy. This study introduces a Deep Attention Network with Multi-Temporal Information Fusion (DAN-MTIF) for SA detection using single-lead electrocardiogram (ECG) signals. This framework utilizes three 1D convolutional neural network (CNN) blocks to extract features from R-R intervals and R-peak amplitudes using segments of varying lengths. Recognizing that features derived from different temporal scales vary in their contribution to classification, we integrate a multi-head attention module with a self-attention mechanism to learn the weights for each feature vector. Comprehensive experiments and comparisons between two paradigms of classical machine learning approaches and deep learning approaches are conducted. Our experiment results demonstrate that (1) compared with benchmark methods, the proposed DAN-MTIF exhibits excellent performance with 0.9106 accuracy, 0.9396 precision, 0.8470 sensitivity, 0.9588 specificity, and 0.8909 \\n<inline-formula><tex-math>$F_{1}$</tex-math></inline-formula>\\n score at per-segment level; (2) DAN-MTIF can effectively extract features with a higher degree of discrimination from ECG segments of multiple timescales than those with a single time scale, ensuring a better SA detection performance; (3) the overall performance of deep learning methods is better than the classical machine learning algorithms, highlighting the superior performance of deep learning approaches for SA detection.\",\"PeriodicalId\":33825,\"journal\":{\"name\":\"IEEE Open Journal of Engineering in Medicine and Biology\",\"volume\":\"5 \",\"pages\":\"792-802\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-03-27\",\"publicationTypes\":\"Journal 
Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10539178\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of Engineering in Medicine and Biology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10539178/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of Engineering in Medicine and Biology","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10539178/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Sleep Apnea (SA) is a prevalent sleep disorder with multifaceted etiologies that can have severe consequences for patients. Diagnosing SA traditionally relies on the in-laboratory polysomnogram (PSG), which records various human physiological activities overnight. SA diagnosis involves manual scoring by qualified physicians. Traditional machine learning methods for SA detection depend on hand-crafted features, making feature selection pivotal for downstream classification tasks. In recent years, deep learning has gained popularity in SA detection due to its capability for automatic feature extraction and superior classification accuracy. This study introduces a Deep Attention Network with Multi-Temporal Information Fusion (DAN-MTIF) for SA detection using single-lead electrocardiogram (ECG) signals. This framework utilizes three 1D convolutional neural network (CNN) blocks to extract features from R-R intervals and R-peak amplitudes using segments of varying lengths. Recognizing that features derived from different temporal scales vary in their contribution to classification, we integrate a multi-head attention module with a self-attention mechanism to learn the weights for each feature vector. Comprehensive experiments and comparisons between two paradigms of classical machine learning approaches and deep learning approaches are conducted. 
Our experimental results demonstrate that (1) compared with benchmark methods, the proposed DAN-MTIF exhibits excellent performance with 0.9106 accuracy, 0.9396 precision, 0.8470 sensitivity, 0.9588 specificity, and 0.8909 $F_{1}$ score at the per-segment level; (2) DAN-MTIF extracts more discriminative features from ECG segments at multiple timescales than from a single timescale, ensuring better SA detection performance; (3) the overall performance of the deep learning methods is better than that of the classical machine learning algorithms, highlighting the superiority of deep learning approaches for SA detection.
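The attention-based fusion step described in the abstract can be illustrated with a minimal pure-Python sketch: three feature vectors, standing in for the outputs of the three CNN blocks at different timescales, attend over one another, and the attended outputs are pooled into one fused representation. This is a single-head simplification under illustrative assumptions (toy 2-dimensional vectors, queries = keys = values, mean-pooling readout); it is not the paper's implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_fuse(features):
    """Fuse per-timescale feature vectors with single-head self-attention.

    Each vector attends over all vectors (queries = keys = values here,
    a simplifying assumption); the attended outputs are mean-pooled into
    one fused feature vector.
    """
    d = len(features[0])
    scale = math.sqrt(d)  # scaled dot-product attention
    attended = []
    for q in features:
        scores = [dot(q, k) / scale for k in features]
        weights = softmax(scores)  # learned weights per feature vector
        out = [sum(w * v[i] for w, v in zip(weights, features))
               for i in range(d)]
        attended.append(out)
    # Mean-pool the attended vectors into the fused representation.
    return [sum(col) / len(attended) for col in zip(*attended)]

# Three hypothetical feature vectors, one per timescale.
f_short, f_mid, f_long = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]
fused = attention_fuse([f_short, f_mid, f_long])
```

Because each attended output is a convex combination of the input vectors, the fused vector stays in their convex hull; in the real model the fused representation would instead feed a classification head.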
Source journal metrics
CiteScore: 9.50
Self-citation rate: 3.40%
Articles per year: 20
Review time: 10 weeks
About the journal: The IEEE Open Journal of Engineering in Medicine and Biology (IEEE OJEMB) is dedicated to serving the community of innovators in medicine, technology, and the sciences, with the core goal of advancing the highest-quality interdisciplinary research between these disciplines. The journal firmly believes that the future of medicine depends on close collaboration between biology and technology, and that fostering interaction between these fields is an important way to advance key discoveries that can improve clinical care. IEEE OJEMB is a gold open-access journal in which the authors retain the copyright to their papers and readers have free access to the full text and PDFs on the IEEE Xplore® Digital Library. However, authors are required to pay an article processing charge at the time their paper is accepted for publication, to cover the cost of publication.