Adil Gaouar, Souaad Hamza Cherif, Abdellatif Rahmoun, Mostafa El Habib Daho
{"title":"Explainable AI for early malaria detection using stacked-LSTM and attention mechanisms","authors":"Adil Gaouar , Souaad Hamza Cherif , Abdellatif Rahmoun , Mostafa El Habib Daho","doi":"10.1016/j.imu.2025.101667","DOIUrl":null,"url":null,"abstract":"<div><div>Malaria remains a global public health challenge, affecting more than 247 million people and causing 619,000 deaths worldwide in 2024 (according to WHO). Rapid diagnosis is essential for effective treatment and to improve patients’ chances of survival. In this study, we propose an interpretable deep learning framework for accurate malaria diagnosis using blood smear images. Also, We evaluate and compare several baseline deep learning (DL) models (fundamentals), customized VGG-16 and VGG-19, as well as newer DL models such as Vision Transformer (ViT) and MobileNet, and, for the first time, a stacked long-short-term memory network (stacked-LSTM) with an attention mechanism for automatic detection of malaria from blood smear images. These models were trained and validated on a publicly available dataset of over 27.000 labeled blood smear images. The comparative and statistical study conducted in this research showed us that the proposed Stacked-LSTM model with attention mechanism outperformed all other approaches, achieving a classification accuracy (0.9912), sensitivity, specificity, precision, F1 score (0.9911), and area under the curve (AUC) superior to all other models. Despite their solid performance, these models are often considered ”black boxes” due to their lack of transparency in the decision-making process, which poses significant challenges in medical applications and fields where human life is at stake. To address this, we have integrated explainable AI (XAI) techniques, namely Grad-CAM and LIME, to improve the model’s interpretability. 
Our results demonstrate the complementary value of combining high-performance deep learning models with XAI methods to enhance trust and certainty in AI-assisted medical diagnosis, suggesting that our model can support early and interpretable malaria detection in clinical environments.</div></div>","PeriodicalId":13953,"journal":{"name":"Informatics in Medicine Unlocked","volume":"57 ","pages":"Article 101667"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Informatics in Medicine Unlocked","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352914825000553","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Medicine","Score":null,"Total":0}
Citation count: 0
Abstract
Malaria remains a global public health challenge, affecting more than 247 million people and causing 619,000 deaths worldwide in 2024, according to the WHO. Rapid diagnosis is essential for effective treatment and improves patients' chances of survival. In this study, we propose an interpretable deep learning framework for accurate malaria diagnosis using blood smear images. We evaluate and compare several baseline deep learning (DL) models, customized VGG-16 and VGG-19 networks, newer DL models such as the Vision Transformer (ViT) and MobileNet, and, for the first time, a stacked long short-term memory network (stacked-LSTM) with an attention mechanism for the automatic detection of malaria from blood smear images. These models were trained and validated on a publicly available dataset of over 27,000 labeled blood smear images. The comparative and statistical study conducted in this research showed that the proposed stacked-LSTM model with attention mechanism outperformed all other approaches, achieving a classification accuracy of 0.9912 and an F1 score of 0.9911, with sensitivity, specificity, precision, and area under the curve (AUC) superior to all other models. Despite their solid performance, such models are often considered "black boxes" because of the opacity of their decision-making process, which poses significant challenges in medical applications and other fields where human life is at stake. To address this, we integrated explainable AI (XAI) techniques, namely Grad-CAM and LIME, to improve the model's interpretability. Our results demonstrate the complementary value of combining high-performance deep learning models with XAI methods to enhance trust and certainty in AI-assisted medical diagnosis, suggesting that our model can support early and interpretable malaria detection in clinical environments.
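For readers unfamiliar with the architecture named in the abstract, the following is a minimal, hypothetical sketch of a stacked-LSTM classifier with attention pooling over image rows. The paper's actual layer sizes, input preprocessing, and training procedure are not described in the abstract, so every dimension and name below is an illustrative assumption, not the authors' implementation.

```python
# Sketch: a two-layer (stacked) LSTM reads a grayscale smear crop row by row;
# an additive attention head weights the per-row hidden states before a
# linear classifier. All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class StackedLSTMAttention(nn.Module):
    def __init__(self, input_size=64, hidden_size=128,
                 num_layers=2, num_classes=2):
        super().__init__()
        # num_layers=2 stacks two LSTM layers on the row sequence.
        self.lstm = nn.LSTM(input_size, hidden_size,
                            num_layers=num_layers, batch_first=True)
        # One scalar attention score per timestep (image row).
        self.attn = nn.Linear(hidden_size, 1)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                       # x: (batch, rows, cols)
        h, _ = self.lstm(x)                     # (batch, rows, hidden)
        scores = self.attn(h)                   # (batch, rows, 1)
        weights = torch.softmax(scores, dim=1)  # normalize over rows
        context = (weights * h).sum(dim=1)      # weighted sum: (batch, hidden)
        return self.fc(context), weights.squeeze(-1)

model = StackedLSTMAttention()
x = torch.randn(4, 64, 64)        # a batch of 4 grayscale 64x64 crops
logits, attn = model(x)           # class logits and per-row attention weights
```

The returned attention weights indicate which image rows the model attended to, which is one reason attention-based models pair naturally with the XAI methods (Grad-CAM, LIME) the paper applies.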
About the journal:
Informatics in Medicine Unlocked (IMU) is an international gold open access journal covering a broad spectrum of topics within medical informatics, including (but not limited to) papers focusing on imaging, pathology, teledermatology, public health, ophthalmological, nursing, and translational medicine informatics. All papers published in the journal are freely accessible on its website.