Subject-Independent per Beat PPG to Single-Lead ECG Mapping
K. M. Abdelgaber, Mostafa Salah, O. Omer, Ahmed E. A. Farghal, Ahmed S. A. Mubarak
DOI: 10.3390/info14070377 (published 2023-07-03)
Citations: 0
Abstract
In this paper, a beat-based autoencoder is proposed for mapping photoplethysmography (PPG) signals to a single-lead electrocardiogram (ECG) signal. The main limiting factors, namely uncleaned data, subject dependency, and erroneous beat segmentation, are addressed. The dataset is cleaned by a two-stage clustering approach. Rather than reconstructing the complete single-lead ECG signal, a beat-based PPG-to-single-lead-ECG (PPG2ECG) conversion is introduced, yielding a simple, lightweight model that fits the computational capabilities of wearable devices. In addition, peak-to-peak segmentation is employed to alleviate errors in PPG onset detection. Furthermore, subject-dependent training is highlighted as a critical factor in training procedures, because most existing work includes different beats/signals from the same subject's record in both the training and testing sets. We therefore provide a completely subject-independent model in which the testing subjects' records are entirely hidden during the training stage, i.e., a subject's record appears in either the training set or the testing set but never both, so testing beats/signals belong to records that never appear in training. The proposed deep learning model is designed to provide efficient feature extraction that attains high reconstruction quality in subject-independent scenarios. The achieved performance is about 0.92 for the correlation coefficient and 0.0086 for the mean square error on the dataset extracted and cleaned from the MIMIC II dataset.
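To make the per-beat PPG2ECG idea concrete, the following is a minimal sketch of a 1D convolutional autoencoder that maps one fixed-length PPG beat to one ECG beat. The layer widths and the assumed beat length of 128 samples are illustrative choices, not the authors' architecture, which the abstract does not specify.

```python
# Illustrative per-beat PPG-to-ECG autoencoder sketch (PyTorch).
# The beat length (128 samples) and channel sizes are assumptions.
import torch
import torch.nn as nn

class BeatPPG2ECG(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses a single-channel PPG beat to a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        # Decoder upsamples back to the original beat length as an ECG beat.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, ppg_beat):
        # ppg_beat: (batch, 1, 128) -> reconstructed ECG beat of the same shape.
        return self.decoder(self.encoder(ppg_beat))
```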
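The subject-independent protocol described above amounts to splitting the data by record rather than by beat, so every beat from a test subject is unseen during training. A minimal sketch of such a record-level split is shown below; the data layout and helper name are hypothetical, assuming the cleaned dataset is a mapping from record ID to its (PPG beat, ECG beat) pairs.

```python
# Sketch of a record-level (subject-independent) train/test split.
# beats_by_record: {record_id: [(ppg_beat, ecg_beat), ...]} (assumed layout).
import random

def subject_independent_split(beats_by_record, test_fraction=0.2, seed=0):
    """Assign whole records to either training or testing, so no beat from a
    test subject's record is ever seen during training."""
    records = sorted(beats_by_record)
    random.Random(seed).shuffle(records)
    n_test = max(1, int(len(records) * test_fraction))
    test_records = set(records[:n_test])

    train, test = [], []
    for rec, beats in beats_by_record.items():
        (test if rec in test_records else train).extend(beats)
    return train, test
```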
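The two reported figures, a correlation coefficient of about 0.92 and a mean square error of about 0.0086, correspond to standard per-beat measures averaged over the test set. A small sketch of how such metrics are typically computed follows; it assumes the true and reconstructed beats share the same normalization, and it is not taken from the authors' code.

```python
# Per-beat Pearson correlation and MSE, averaged over all test beats.
import numpy as np

def beat_metrics(ecg_true, ecg_pred):
    """ecg_true, ecg_pred: arrays of shape (n_beats, beat_len)."""
    corrs, mses = [], []
    for t, p in zip(ecg_true, ecg_pred):
        corrs.append(np.corrcoef(t, p)[0, 1])   # Pearson r for this beat
        mses.append(np.mean((t - p) ** 2))      # MSE for this beat
    return float(np.mean(corrs)), float(np.mean(mses))
```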