{"title":"Early Screening of Valvular Heart Disease Prediction using CNN-based Mobile Network","authors":"Tanmay Sinha Roy, J. K. Roy, N. Mandal","doi":"10.1109/ICCECE51049.2023.10085513","DOIUrl":null,"url":null,"abstract":"The rapid emergence of technology and big data science opened up a significant amount of work that has been carried out in the field of feature extraction and classification techniques of heart sound using various deep learning methods. Practically, medical practitioners use the same old scientific method and practice to seek out any cardiac disorders and predict any abnormality in the human heart. Heart sound normalization, denoising, segmentation, feature extraction, and classification techniques provide a suitable way of study of phonocardiography (PCG) signal analysis which eventually reduces the cost, makes the system compact, and simultaneously, can work with extensive training data. This paper mainly indulges in two parts feature extraction and classification. The proposed deep learning study for PCG signal used online available heart disease datasets, and time domain features like average energy, power, root mean square (RMS), total harmonic distortion, and zero Crossing rates are used. Statistical features used are kurtosis and skewness. The acoustic features used are Mel-frequency cepstrum coefficients (MFCCs), mel, chroma, contrast, and tonnetz. For the classification of heart sound, the proposed modified CNN-based mobile network is used. The modified CNN-based mobile network is very effective in heart sound analysis as it requires very less computation time and storage. The proposed CNN-based modified Mobile Network model attained an accuracy of 99.04 + 0.07% on the test dataset with a sensitivity of 96.8 + 0.03 % and specificity of 97.2 + 0.09%.","PeriodicalId":447131,"journal":{"name":"2023 International Conference on Computer, Electrical & Communication Engineering (ICCECE)","volume":"441 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Computer, Electrical & Communication Engineering (ICCECE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCECE51049.2023.10085513","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
The rapid emergence of technology and big data science has enabled a significant body of work on feature extraction and classification of heart sounds using various deep learning methods. In practice, however, medical practitioners still rely on the same long-established clinical methods to detect cardiac disorders and predict abnormalities of the human heart. Heart sound normalization, denoising, segmentation, feature extraction, and classification techniques provide a suitable framework for phonocardiography (PCG) signal analysis that reduces cost, keeps the system compact, and at the same time can work with extensive training data. This paper focuses on two parts: feature extraction and classification. The proposed deep learning study of PCG signals uses publicly available heart disease datasets, with time-domain features such as average energy, power, root mean square (RMS), total harmonic distortion, and zero-crossing rate. The statistical features used are kurtosis and skewness. The acoustic features used are Mel-frequency cepstral coefficients (MFCCs), mel spectrogram, chroma, spectral contrast, and tonnetz. For heart sound classification, a modified CNN-based mobile network is proposed; it is well suited to heart sound analysis because it requires very little computation time and storage. The proposed CNN-based modified mobile network model attained an accuracy of 99.04 ± 0.07% on the test dataset, with a sensitivity of 96.8 ± 0.03% and a specificity of 97.2 ± 0.09%.
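The abstract lists the time-domain, statistical, and acoustic features used for the PCG recordings but does not name a toolchain. The following is a minimal sketch of such a feature-extraction stage, assuming the recordings are WAV files and that librosa and scipy are used; the function name and parameters are illustrative, total harmonic distortion is omitted for brevity, and this is not the authors' actual pipeline.

```python
# Hypothetical feature-extraction sketch (not the paper's code).
import numpy as np
import librosa
from scipy.stats import kurtosis, skew


def extract_pcg_features(wav_path, n_mfcc=13):
    """Return a 1-D feature vector combining time-domain, statistical,
    and acoustic features for a single heart-sound recording."""
    y, sr = librosa.load(wav_path)  # default librosa sampling rate (22050 Hz)

    # Time-domain features
    energy = np.sum(y ** 2)                      # total signal energy
    power = np.mean(y ** 2)                      # average power
    rms = np.sqrt(power)                         # root mean square
    zcr = np.mean(librosa.feature.zero_crossing_rate(y))

    # Statistical features
    kurt = kurtosis(y)
    skw = skew(y)

    # Acoustic features (frame-wise, averaged over time)
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc), axis=1)
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)
    chroma = np.mean(librosa.feature.chroma_stft(y=y, sr=sr), axis=1)
    contrast = np.mean(librosa.feature.spectral_contrast(y=y, sr=sr), axis=1)
    tonnetz = np.mean(
        librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr), axis=1
    )

    return np.hstack([energy, power, rms, zcr, kurt, skw,
                      mfcc, mel, chroma, contrast, tonnetz])
```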
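The classifier is described only as a "modified CNN-based mobile network". As an illustration of the general idea, the sketch below builds a small MobileNet-style network from depthwise-separable convolution blocks in Keras. The layer sizes, spectrogram-shaped input, and two-class softmax output are assumptions; the authors' specific modifications are not reproduced here.

```python
# Generic MobileNet-style classifier sketch (assumed architecture, not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, models


def separable_block(x, filters, stride=1):
    """Depthwise 3x3 convolution followed by a pointwise (1x1) convolution."""
    x = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)


def build_pcg_mobilenet(input_shape=(128, 128, 1), num_classes=2):
    """Small MobileNet-like network for spectrogram-shaped PCG inputs."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    for filters, stride in [(64, 1), (128, 2), (128, 1), (256, 2)]:
        x = separable_block(x, filters, stride)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)


model = build_pcg_mobilenet()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The depthwise-separable blocks are what keep the parameter count and computation low, which is consistent with the abstract's claim that the mobile network needs little computation time and storage.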