LtXe-EnsNet: A Lightweight and Explainable Ensembled Deep Learning Model for Heart Sound Abnormality Classification From Sensor Data

MD Toufiqur Rahman; Celia Shahnaz

IEEE Sensors Letters, vol. 9, no. 9, pp. 1-4, published 2025-08-06
DOI: 10.1109/LSENS.2025.3596256 (https://ieeexplore.ieee.org/document/11119088/)
Citations: 0
Abstract
Cardiovascular diseases (CVDs), characterized by abnormalities in the heart, must be detected with high precision and in real time. Phonocardiogram (PCG) signals are used to detect cardiac irregularities, providing a crucial, noninvasive indicator for heart state monitoring. This research focuses on accurately assessing cardiovascular health by classifying heart sounds in real-time sensor data. In this letter, an explainable, lightweight, deep learning-based multifeature ensemble approach is proposed for the automated identification of CVDs from PCG signals collected with a digital stethoscope. The method leverages the combined strengths of spectrogram and mel-frequency cepstral coefficient (MFCC) features to perform a multiclass classification task, with Grad-CAM providing visual explanations for model decisions. The proposed approach takes both the spectrogram and the MFCC features as inputs, channeling them through dedicated deep neural network-based feature extraction modules. The attention-based “MFCC-Module” extracts significant features from the MFCCs, while the “Spectro-Module” captures essential information from the spectrogram. By fusing these two feature sets, the architecture effectively classifies the signals. The proposed robust lightweight model outperforms all compared models, achieving an accuracy of 99.5% on five-class classification of PCG signal data from the sensor.
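The dual-feature front end described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the FFT/mel parameters, the synthetic 2 kHz test signal, and the simple mean-pooling "fusion" are all assumptions chosen for illustration; the letter's actual model replaces the pooling step with attention-based neural feature extractors before fusing the two branches.

```python
import numpy as np

def stft_magnitude(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a framed FFT with a Hann window."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filters mapping rfft bins to mel bands."""
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz2mel(0.0), hz2mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fb[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[m - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mfcc(spec, sr, n_mels=20, n_coef=13):
    """MFCCs: log mel energies followed by a type-II DCT."""
    n_fft = (spec.shape[0] - 1) * 2
    mel_energy = mel_filterbank(n_mels, n_fft, sr) @ (spec ** 2)
    log_mel = np.log(mel_energy + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_coef),
                                  (2 * n + 1) / (2 * n_mels)))
    return dct @ log_mel  # (n_coef, time)

# Synthetic stand-in for a 1-s PCG segment sampled at 2 kHz
# (heart sound energy is concentrated well below 1 kHz).
sr = 2000
t = np.arange(sr) / sr
pcg = np.sin(2 * np.pi * 50 * t) \
    + 0.3 * np.random.default_rng(0).standard_normal(sr)

spec = stft_magnitude(pcg)   # input to the spectrogram branch
mfc = mfcc(spec, sr)         # input to the MFCC branch
# Placeholder "fusion": pool each branch over time, then concatenate,
# mimicking the late fusion of the two feature modules.
fused = np.concatenate([spec.mean(axis=1), mfc.mean(axis=1)])
print(spec.shape, mfc.shape, fused.shape)  # (129, 14) (13, 14) (142,)
```

In a full model, each branch's feature map would feed a small convolutional/attention module, and the fused vector would pass through a softmax head over the five heart sound classes.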