LtXe-EnsNet: A Lightweight and Explainable Ensembled Deep Learning Model for Heart Sound Abnormality Classification From Sensor Data

Impact Factor: 2.2 | Q3 (Engineering, Electrical & Electronic)
MD Toufiqur Rahman; Celia Shahnaz
{"title":"LtXe-EnsNet: A Lightweight and Explainable Ensembled Deep Learning Model for Heart Sound Abnormality Classification From Sensor Data","authors":"MD Toufiqur Rahman;Celia Shahnaz","doi":"10.1109/LSENS.2025.3596256","DOIUrl":null,"url":null,"abstract":"Cardiovascular diseases (CVDs), characterized by abnormalities in the heart, must be detected with high precision and in real-time. Phonocardiogram (PCG) signals are utilized for the detection of cardiac irregularities, thereby providing a crucial indicator for heart state monitoring in a noninvasive manner. The proposed research focuses on accurately identifying cardiovascular health by classifying the heart sounds in real-time sensor data. In this letter, an explainable and deep learning-based lightweight multifeature ensemble approach is proposed for the automated identification of CVDs from PCG signals collected using a digital stethoscope. Our method leverages the combined strengths of spectrogram and mel-frequency cepstral coefficient (MFCC) features to perform a multiclass classification task, with Grad-CAM providing visual explanations for model decisions. The proposed approach integrates both the spectrogram and MFCC features as inputs, channeling them through dedicated deep neural network-based feature extraction modules. The attention-based “MFCC-Module’’ extracts significant features from MFCC, while the spectro-module captures essential information from the spectrogram. By fusing these two feature sets, the architecture effectively classifies the signals. Our proposed robust lightweight model surpasses all other models, achieving an impressive accuracy of 99.5% for five-class classifications of PCG signal data from the sensor.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"9 9","pages":"1-4"},"PeriodicalIF":2.2000,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Sensors Letters","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11119088/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Cardiovascular diseases (CVDs), characterized by abnormalities in the heart, must be detected with high precision and in real time. Phonocardiogram (PCG) signals are used to detect cardiac irregularities, providing a crucial indicator for noninvasive heart state monitoring. The proposed research focuses on accurately assessing cardiovascular health by classifying heart sounds in real-time sensor data. In this letter, an explainable, deep learning-based, lightweight multifeature ensemble approach is proposed for the automated identification of CVDs from PCG signals collected using a digital stethoscope. Our method leverages the combined strengths of spectrogram and mel-frequency cepstral coefficient (MFCC) features to perform a multiclass classification task, with Grad-CAM providing visual explanations for model decisions. The proposed approach takes both spectrogram and MFCC features as inputs, channeling them through dedicated deep neural network-based feature extraction modules. The attention-based "MFCC-Module" extracts significant features from the MFCCs, while the spectro-module captures essential information from the spectrogram. By fusing these two feature sets, the architecture effectively classifies the signals. The proposed robust lightweight model surpasses the compared models, achieving an accuracy of 99.5% on five-class classification of PCG signal data from the sensor.
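The abstract describes a two-branch design: an attention-based module over MFCC features and a spectrogram module, whose embeddings are fused before a five-class classifier. The sketch below illustrates that general idea in PyTorch with librosa preprocessing; the layer sizes, attention form, sampling rate, and feature parameters are illustrative assumptions, not the published LtXe-EnsNet architecture.

```python
# Minimal sketch of a dual-branch PCG classifier (MFCC branch + spectrogram
# branch, fused, 5-class output). All hyperparameters are assumptions.
import librosa
import numpy as np
import torch
import torch.nn as nn

def pcg_features(wav_path, sr=2000, n_mfcc=20, n_mels=64):
    """Load a PCG recording and return (MFCC, log-mel spectrogram) tensors."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)           # (n_mfcc, T)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)  # (n_mels, T)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return (torch.tensor(mfcc, dtype=torch.float32),
            torch.tensor(log_mel, dtype=torch.float32))

class DualBranchPCGNet(nn.Module):
    """Hypothetical lightweight two-branch classifier (not the published model)."""
    def __init__(self, n_mfcc=20, n_classes=5, d_model=64):
        super().__init__()
        # "MFCC-Module": project MFCC frames and apply self-attention over time.
        self.mfcc_proj = nn.Linear(n_mfcc, d_model)
        self.mfcc_attn = nn.MultiheadAttention(d_model, num_heads=4,
                                               batch_first=True)
        # "Spectro-module": a small CNN over the log-mel spectrogram.
        self.spec_cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Fuse the two embeddings and classify into five heart-sound classes.
        self.head = nn.Sequential(nn.Linear(d_model + 32, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, mfcc, log_mel):
        # mfcc: (B, n_mfcc, T) -> (B, T, d_model); attend over time, mean-pool.
        x = self.mfcc_proj(mfcc.transpose(1, 2))
        x, _ = self.mfcc_attn(x, x, x)
        x = x.mean(dim=1)
        # log_mel: (B, n_mels, T) -> add a channel dim for the CNN branch.
        s = self.spec_cnn(log_mel.unsqueeze(1))
        return self.head(torch.cat([x, s], dim=1))

# Example usage (file path is a placeholder):
# mfcc, log_mel = pcg_features("recording.wav")
# logits = DualBranchPCGNet()(mfcc.unsqueeze(0), log_mel.unsqueeze(0))
# pred = logits.argmax(dim=1)
```

Because the spectrogram branch is convolutional, Grad-CAM-style heatmaps can be computed from its last convolutional layer to indicate which time-frequency regions drive each class decision, which matches the kind of visual explanation the abstract mentions.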
Source Journal
IEEE Sensors Letters (Engineering: Electrical and Electronic Engineering)
CiteScore: 3.50
Self-citation rate: 7.10%
Articles published: 194