Feature Extraction and Classification of Phonocardiograms using Convolutional Neural Networks
Devjyoti Chakraborty, Snehangshu Bhattacharya, Ayush Thakur, A. R. Gosthipaty, Chira Datta
2020 IEEE 1st International Conference for Convergence in Engineering (ICCE), 2020-09-05
DOI: 10.1109/ICCE50343.2020.9290565
Citations: 6
Abstract
Heart auscultation is a primary and cost-effective form of clinical examination of the patient. A phonocardiogram (PCG) is a high-fidelity recording that captures the heart auscultation sound. The PCG signal is used as a diagnostic test for evaluating the status of the heart and helps in identifying related diseases. Automating this process would lead to a quicker examination of patients, especially in environments where the doctor (specialist) to patient ratio is low. This research paper delves into an approach for extracting vital features from a phonocardiogram and then classifying it into normal and abnormal classes using deep learning techniques. Our contributions include (a) using class weights [1] to address a heavy class imbalance in the provided medical dataset, (b) transforming the data from an auditory representation to a visual one (spectrograms), (c) using deep convolutional neural networks to extract features from the spectrograms, and (d) using the extracted features to classify the PCG signals in terms of quality (good vs. bad) and abnormality (normal vs. abnormal). The proposed algorithm achieved overall scores of 91.45% (91.86% sensitivity and 91.04% specificity) and 86.57% (89.78% sensitivity and 83.37% specificity) on the training and test data, respectively.
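To make the pipeline the abstract outlines more concrete, below is a minimal sketch of its three main steps: converting a PCG recording into a spectrogram, weighting the classes to counter the imbalance, and training a small convolutional network. It assumes SciPy, scikit-learn, and TensorFlow/Keras; the sampling rate, spectrogram parameters, network layout, and the load_pcg_dataset loader are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch, not the paper's code: spectrogram conversion, class
# weights for the normal/abnormal imbalance, and a small CNN classifier.
import numpy as np
from scipy.signal import spectrogram
from sklearn.utils.class_weight import compute_class_weight
import tensorflow as tf

def pcg_to_spectrogram(signal, fs=2000, out_shape=(128, 128)):
    """Convert a 1-D PCG recording into a fixed-size log-spectrogram image."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
    sxx = np.log1p(sxx)                                   # compress dynamic range
    return tf.image.resize(sxx[..., np.newaxis], out_shape).numpy()

# `load_pcg_dataset` is a hypothetical loader returning a list of 1-D numpy
# arrays (recordings) and 0/1 labels (normal vs. abnormal).
recordings, labels = load_pcg_dataset()
X = np.stack([pcg_to_spectrogram(r) for r in recordings])
y = np.array(labels)

# Class weights counteract the heavy class imbalance in the dataset.
weights = compute_class_weight(class_weight="balanced", classes=np.unique(y), y=y)
class_weight = dict(enumerate(weights))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=X.shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # normal vs. abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(), tf.keras.metrics.AUC()])
model.fit(X, y, epochs=20, validation_split=0.2, class_weight=class_weight)
```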