Multi-Label Cardiac Abnormalities Classification on Selected Leads of ECG Signals
Zhuoyang Xu, Yangming Guo, Tingting Zhao, Zhuo Liu, Xingzhi Sun
{"title":"心电信号导联的多标记心脏异常分类","authors":"Zhuoyang Xu, Yangming Guo, Tingting Zhao, Zhuo Liu, Xingzhi Sun","doi":"10.23919/cinc53138.2021.9662746","DOIUrl":null,"url":null,"abstract":"As part of the PhysioNet/Computing in Cardiology Challenge 2021, Our team, HeartBeats, developed an ensembled model based on SE-ResNet for identifying 30 kinds of cardiac abnormalities from different lead combinations of electrocardiograms (ECGs). At pre-processing stage, ECGs were down-sampled to 500 Hz and each record is normalized using Z-Score normalization. We then employed several residual neural network modules with squeeze-and-excitation blocks to learn from the first 15-second segments of the signals. We designed a multi-label loss to emphasize the impact of wrong predictions during training. We relabelled the dataset which contains only 9 classes using our baseline model build in last year's challenge. Five-fold cross-validation was used to assess the performance of our models. Our classifiers received the scores of 0.58, 0.55, 0.56, 0.53, and 0.53 for the 12-lead, 6-lead, 4-lead, 3-lead, and 2-lead versions with the Challenge evaluation metric. Our final model performed well on the test data. However, the results were not officially ranked because our training code may select the offline pre-trained models rather than using the training data if the pre-trained models performed better than the trained models on the training data. The model can therefore fail to learn from new training data.","PeriodicalId":126746,"journal":{"name":"2021 Computing in Cardiology (CinC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Multi-Label Cardiac Abnormalities Classification on Selected Leads of ECG Signals\",\"authors\":\"Zhuoyang Xu, Yangming Guo, Tingting Zhao, Zhuo Liu, Xingzhi Sun\",\"doi\":\"10.23919/cinc53138.2021.9662746\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As part of the PhysioNet/Computing in Cardiology Challenge 2021, Our team, HeartBeats, developed an ensembled model based on SE-ResNet for identifying 30 kinds of cardiac abnormalities from different lead combinations of electrocardiograms (ECGs). At pre-processing stage, ECGs were down-sampled to 500 Hz and each record is normalized using Z-Score normalization. We then employed several residual neural network modules with squeeze-and-excitation blocks to learn from the first 15-second segments of the signals. We designed a multi-label loss to emphasize the impact of wrong predictions during training. We relabelled the dataset which contains only 9 classes using our baseline model build in last year's challenge. Five-fold cross-validation was used to assess the performance of our models. Our classifiers received the scores of 0.58, 0.55, 0.56, 0.53, and 0.53 for the 12-lead, 6-lead, 4-lead, 3-lead, and 2-lead versions with the Challenge evaluation metric. Our final model performed well on the test data. However, the results were not officially ranked because our training code may select the offline pre-trained models rather than using the training data if the pre-trained models performed better than the trained models on the training data. 
The model can therefore fail to learn from new training data.\",\"PeriodicalId\":126746,\"journal\":{\"name\":\"2021 Computing in Cardiology (CinC)\",\"volume\":\"19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 Computing in Cardiology (CinC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/cinc53138.2021.9662746\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 Computing in Cardiology (CinC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/cinc53138.2021.9662746","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
As part of the PhysioNet/Computing in Cardiology Challenge 2021, our team, HeartBeats, developed an ensemble model based on SE-ResNet for identifying 30 kinds of cardiac abnormalities from different lead combinations of electrocardiograms (ECGs). In the pre-processing stage, ECGs were down-sampled to 500 Hz and each record was normalized using Z-score normalization. We then employed several residual neural network modules with squeeze-and-excitation blocks to learn from the first 15-second segments of the signals. We designed a multi-label loss to emphasize the impact of wrong predictions during training. We relabelled the dataset that contains only 9 classes using the baseline model we built in last year's challenge. Five-fold cross-validation was used to assess the performance of our models. Our classifiers received scores of 0.58, 0.55, 0.56, 0.53, and 0.53 for the 12-lead, 6-lead, 4-lead, 3-lead, and 2-lead versions on the Challenge evaluation metric. Our final model performed well on the test data. However, the results were not officially ranked, because our training code may select the offline pre-trained models rather than using the training data whenever the pre-trained models perform better than the newly trained models on the training data; the model can therefore fail to learn from new training data.
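The pre-processing and the squeeze-and-excitation mechanism described in the abstract can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the authors' released code: the function and parameter names (preprocess, TARGET_FS, SEBlock, reduction=16) are hypothetical, and the SE block follows the standard squeeze-and-excitation formulation rather than the team's exact SE-ResNet architecture.

```python
# Minimal sketch with assumed names; not the authors' actual implementation.
import numpy as np
from scipy.signal import resample_poly
import torch
import torch.nn as nn

TARGET_FS = 500          # Hz, per the paper's pre-processing
SEGMENT_SECONDS = 15     # the first 15-second segment is used

def preprocess(ecg: np.ndarray, fs: int) -> np.ndarray:
    """Resample to 500 Hz, keep the first 15 s, and Z-score each lead.

    ecg: array of shape (n_leads, n_samples) recorded at sampling rate `fs`.
    """
    if fs != TARGET_FS:
        ecg = resample_poly(ecg, TARGET_FS, fs, axis=1)
    n = TARGET_FS * SEGMENT_SECONDS
    ecg = ecg[:, :n]
    if ecg.shape[1] < n:                      # zero-pad short records (assumed)
        ecg = np.pad(ecg, ((0, 0), (0, n - ecg.shape[1])))
    mean = ecg.mean(axis=1, keepdims=True)
    std = ecg.std(axis=1, keepdims=True) + 1e-8
    return (ecg - mean) / std

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block for 1-D feature maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))            # squeeze over time, then excite
        return x * w.unsqueeze(-1)            # re-weight channels
```

In this sketch, each record is reduced to a fixed-length (n_leads, 7500) array before being fed to the network, and the SE block re-weights convolutional channels by their global temporal statistics, which matches the role squeeze-and-excitation plays in the SE-ResNet family the paper builds on.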