Subject-Independent per Beat PPG to Single-Lead ECG Mapping

Inf. Comput. | Pub Date: 2023-07-03 | DOI: 10.3390/info14070377
K. M. Abdelgaber, Mostafa Salah, O. Omer, Ahmed E. A. Farghal, Ahmed S. A. Mubarak
{"title":"受试者独立的每拍PPG到单导联心电图映射","authors":"K. M. Abdelgaber, Mostafa Salah, O. Omer, Ahmed E. A. Farghal, Ahmed S. A. Mubarak","doi":"10.3390/info14070377","DOIUrl":null,"url":null,"abstract":"In this paper, a beat-based autoencoder is proposed for mapping photoplethysmography (PPG) to a single-lead electrocardiogram (single-lead ECG) signal. The main limiting factors represented in uncleaned data, subject dependency, and erroneous beat segmentation are regarded. The dataset is cleaned by a two-stage clustering approach. Rather than complete single–lead ECG signal reconstruction, a beat-based PPG-to-single-lead-ECG (PPG2ECG) conversion is introduced for providing a simple lightweight model that meets the computational capabilities of wearable devices. In addition, peak-to-peak segmentation is employed for alleviating errors in PPG onset detection. Furthermore, subject-dependent training is highlighted as a critical factor in training procedures because most existing work includes different beats/signals from the same subject’s record in both training and testing sets. So, we provide a completely subject-independent model where the testing subjects’ records are hidden in the training stage entirely, i.e., a subject record appears once either in the training or testing set, but testing beats/signals belong to records that never appear in the training set. The proposed deep learning model is designed for providing efficient feature extraction that attains high reconstruction quality over subject-independent scenarios. The achieved performance is about 0.92 for the correlation coefficient and 0.0086 for the mean square error for the dataset extracted/cleaned from the MIMIC II dataset.","PeriodicalId":13622,"journal":{"name":"Inf. Comput.","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Subject-Independent per Beat PPG to Single-Lead ECG Mapping\",\"authors\":\"K. M. Abdelgaber, Mostafa Salah, O. Omer, Ahmed E. A. Farghal, Ahmed S. A. Mubarak\",\"doi\":\"10.3390/info14070377\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a beat-based autoencoder is proposed for mapping photoplethysmography (PPG) to a single-lead electrocardiogram (single-lead ECG) signal. The main limiting factors represented in uncleaned data, subject dependency, and erroneous beat segmentation are regarded. The dataset is cleaned by a two-stage clustering approach. Rather than complete single–lead ECG signal reconstruction, a beat-based PPG-to-single-lead-ECG (PPG2ECG) conversion is introduced for providing a simple lightweight model that meets the computational capabilities of wearable devices. In addition, peak-to-peak segmentation is employed for alleviating errors in PPG onset detection. Furthermore, subject-dependent training is highlighted as a critical factor in training procedures because most existing work includes different beats/signals from the same subject’s record in both training and testing sets. So, we provide a completely subject-independent model where the testing subjects’ records are hidden in the training stage entirely, i.e., a subject record appears once either in the training or testing set, but testing beats/signals belong to records that never appear in the training set. 
The proposed deep learning model is designed for providing efficient feature extraction that attains high reconstruction quality over subject-independent scenarios. The achieved performance is about 0.92 for the correlation coefficient and 0.0086 for the mean square error for the dataset extracted/cleaned from the MIMIC II dataset.\",\"PeriodicalId\":13622,\"journal\":{\"name\":\"Inf. Comput.\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Inf. Comput.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/info14070377\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inf. Comput.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/info14070377","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, a beat-based autoencoder is proposed for mapping photoplethysmography (PPG) to a single-lead electrocardiogram (single-lead ECG) signal. The main limiting factors, namely uncleaned data, subject dependency, and erroneous beat segmentation, are addressed. The dataset is cleaned by a two-stage clustering approach. Rather than reconstructing the complete single-lead ECG signal, a beat-based PPG-to-single-lead-ECG (PPG2ECG) conversion is introduced to provide a simple, lightweight model that fits the computational capabilities of wearable devices. In addition, peak-to-peak segmentation is employed to mitigate errors in PPG onset detection. Furthermore, subject-dependent training is highlighted as a critical factor in training procedures because most existing work includes different beats/signals from the same subject's record in both the training and testing sets. We therefore provide a completely subject-independent model in which the test subjects' records are entirely withheld from the training stage, i.e., each subject record appears in either the training set or the testing set, and test beats/signals belong to records that never appear in the training set. The proposed deep learning model is designed to provide efficient feature extraction that attains high reconstruction quality in subject-independent scenarios. On the dataset extracted and cleaned from the MIMIC II dataset, the achieved performance is about 0.92 for the correlation coefficient and 0.0086 for the mean square error.
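
As a rough illustration of the protocol described in the abstract, the sketch below shows three of the named ideas: peak-to-peak segmentation of the PPG into beats, a record-level subject-independent split, and the two reported metrics (Pearson correlation coefficient and mean square error) computed per reconstructed beat. The helper names, the splitting ratio, and the peak-detection settings are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of three ideas from the abstract: peak-to-peak beat
# segmentation, a record-level subject-independent split, and per-beat metrics.
# All names and parameter values here are assumptions for illustration only.
import numpy as np
from scipy.signal import find_peaks

def segment_beats(ppg, fs, min_rr_s=0.4):
    """Slice a PPG signal into peak-to-peak beats (avoids PPG onset detection)."""
    peaks, _ = find_peaks(ppg, distance=int(min_rr_s * fs))
    return [ppg[peaks[i]:peaks[i + 1]] for i in range(len(peaks) - 1)]

def subject_independent_split(record_ids, test_fraction=0.2, seed=0):
    """Assign each subject record to the train or test set, never both."""
    rng = np.random.default_rng(seed)
    ids = np.array(sorted(set(record_ids)))
    rng.shuffle(ids)
    n_test = max(1, int(len(ids) * test_fraction))
    return set(ids[n_test:].tolist()), set(ids[:n_test].tolist())

def beat_metrics(ecg_true, ecg_pred):
    """Per-beat Pearson correlation coefficient and mean square error."""
    ecg_true = np.asarray(ecg_true, dtype=float)
    ecg_pred = np.asarray(ecg_pred, dtype=float)
    corr = np.corrcoef(ecg_true, ecg_pred)[0, 1]
    mse = float(np.mean((ecg_true - ecg_pred) ** 2))
    return corr, mse
```

With a split like this, every test beat belongs to a record that never appears in training, which is the subject-independent evaluation the abstract contrasts with prior work that mixes beats from the same subject across both sets.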