Feature Extraction and Classification of Phonocardiograms using Convolutional Neural Networks

Devjyoti Chakraborty, Snehangshu Bhattacharya, Ayush Thakur, A. R. Gosthipaty, Chira Datta
{"title":"基于卷积神经网络的心音图特征提取与分类","authors":"Devjyoti Chakraborty, Snehangshu Bhattacharya, Ayush Thakur, A. R. Gosthipaty, Chira Datta","doi":"10.1109/ICCE50343.2020.9290565","DOIUrl":null,"url":null,"abstract":"Heart auscultation is a primary and cost-effective form of clinical examination of the patient. Phonocardiogram (PCG) is a high-fidelity recording that captures the heart auscultation sound. PCG signal is used as a diagnostic test for evaluating the status of the heart and it helps in identifying related diseases.Automating this process would lead to a quicker examination of patients, especially in an environment where the doctor (specialist) to patient ratio is low. This research paper delves into an approach for extracting vital features from a Phonocardiogram and then classifying it into normal and abnormal classes using Deep Learning techniques. Our contributions include (a) Using class weights [1] a heavy class imbalance in the provided medical dataset, (b) Data transformation from an auditory perspective to a visual one (Spectrograms), (c) Using Deep Convolutional Neural Networks to extract features from the spectrogram and (d) Using the extracted features to classify the PCG signals in terms of quality (good vs. bad) and abnormality (normal vs. abnormal).The proposed algorithm achieved the overall score of 91.45% (91.86% sensitivity and 91.04% specificity) and 86.57% (89.78% sensitivity and 83.37% specificity) on train and test data respectively.","PeriodicalId":421963,"journal":{"name":"2020 IEEE 1st International Conference for Convergence in Engineering (ICCE)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Feature Extraction and Classification of Phonocardiograms using Convolutional Neural Networks\",\"authors\":\"Devjyoti Chakraborty, Snehangshu Bhattacharya, Ayush Thakur, A. R. Gosthipaty, Chira Datta\",\"doi\":\"10.1109/ICCE50343.2020.9290565\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Heart auscultation is a primary and cost-effective form of clinical examination of the patient. Phonocardiogram (PCG) is a high-fidelity recording that captures the heart auscultation sound. PCG signal is used as a diagnostic test for evaluating the status of the heart and it helps in identifying related diseases.Automating this process would lead to a quicker examination of patients, especially in an environment where the doctor (specialist) to patient ratio is low. This research paper delves into an approach for extracting vital features from a Phonocardiogram and then classifying it into normal and abnormal classes using Deep Learning techniques. Our contributions include (a) Using class weights [1] a heavy class imbalance in the provided medical dataset, (b) Data transformation from an auditory perspective to a visual one (Spectrograms), (c) Using Deep Convolutional Neural Networks to extract features from the spectrogram and (d) Using the extracted features to classify the PCG signals in terms of quality (good vs. bad) and abnormality (normal vs. 
abnormal).The proposed algorithm achieved the overall score of 91.45% (91.86% sensitivity and 91.04% specificity) and 86.57% (89.78% sensitivity and 83.37% specificity) on train and test data respectively.\",\"PeriodicalId\":421963,\"journal\":{\"name\":\"2020 IEEE 1st International Conference for Convergence in Engineering (ICCE)\",\"volume\":\"61 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE 1st International Conference for Convergence in Engineering (ICCE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCE50343.2020.9290565\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 1st International Conference for Convergence in Engineering (ICCE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCE50343.2020.9290565","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

Heart auscultation is a primary and cost-effective form of clinical examination of the patient. A phonocardiogram (PCG) is a high-fidelity recording that captures the heart auscultation sound. The PCG signal is used as a diagnostic test for evaluating the status of the heart, and it helps in identifying related diseases. Automating this process would lead to a quicker examination of patients, especially in an environment where the doctor (specialist) to patient ratio is low. This paper delves into an approach for extracting vital features from a phonocardiogram and then classifying it into normal and abnormal classes using deep learning techniques. Our contributions include (a) using class weights [1] to address a heavy class imbalance in the provided medical dataset, (b) transforming the data from an auditory representation to a visual one (spectrograms), (c) using deep convolutional neural networks to extract features from the spectrograms, and (d) using the extracted features to classify the PCG signals in terms of quality (good vs. bad) and abnormality (normal vs. abnormal). The proposed algorithm achieved overall scores of 91.45% (91.86% sensitivity and 91.04% specificity) and 86.57% (89.78% sensitivity and 83.37% specificity) on the training and test data, respectively.
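
The abstract outlines a pipeline of class weighting, spectrogram conversion, and CNN-based classification. The paper's own code is not reproduced on this page; the following is a minimal sketch of that kind of pipeline. The arrays `signals` and `labels`, the 2 kHz sampling rate, the mel-spectrogram settings, and the network architecture are illustrative assumptions and do not reflect the authors' actual implementation or hyperparameters.

# Minimal sketch (not the authors' implementation): class weighting for an
# imbalanced PCG dataset, log-mel spectrogram conversion, and a small CNN.
# `signals`, `labels`, the sampling rate, and the architecture are assumptions.
import numpy as np
import librosa
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

sr = 2000  # assumed PCG sampling rate (Hz)

# Toy stand-ins for the real dataset: ten 5-second recordings, 9 normal / 1 abnormal.
signals = [np.random.randn(sr * 5).astype(np.float32) for _ in range(10)]
labels = np.array([0] * 9 + [1])

def to_spectrogram(signal, sr=sr):
    """Convert a 1-D PCG waveform into a log-scaled mel spectrogram 'image'."""
    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

specs = np.stack([to_spectrogram(s) for s in signals])[..., np.newaxis]

# Class weights make the rare (abnormal) class count more in the loss,
# countering the heavy class imbalance mentioned in the abstract.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(labels), y=labels)
class_weight = dict(enumerate(weights))

# A small CNN over the spectrogram "images"; binary output (normal vs. abnormal).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=specs.shape[1:]),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(name="sensitivity")])
model.fit(specs, labels, epochs=2, class_weight=class_weight, verbose=0)

Because the paper reports sensitivity and specificity, recall on the abnormal class is tracked here as a stand-in metric; the actual spectrogram parameters, network depth, and training setup used by the authors may differ.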