A novel convolutional interpretability model for pixel-level interpretation of medical image classification through fusion of machine learning and fuzzy logic
{"title":"A novel convolutional interpretability model for pixel-level interpretation of medical image classification through fusion of machine learning and fuzzy logic","authors":"Mohammad Ennab, Hamid Mcheick","doi":"10.1016/j.smhl.2024.100535","DOIUrl":null,"url":null,"abstract":"<div><div>Artificial intelligence (AI) models for medical image analysis have achieved high diagnostic performance, but they often lack interpretability, limiting their clinical adoption. Existing methods can explain predictions at the image level, but they cannot provide pixel-level insights. This study proposes a novel fusion of machine learning and fuzzy logic to develop an interpretable model that can precisely identify discriminative image regions driving diagnostic decisions and generate heatmap visualization. The model is trained and evaluated on a dataset of CT scans containing healthy and diseased organ images. Quantitative features are extracted across pixels and normalized into representation matrices using a machine learning model. Subsequently, the contribution of each detected lesion to the overall prediction is quantified using fuzzy logic. Organ segment weighted averages are computed to identify significant lesions. The model explains application of AI in medical imaging with an unprecedented level of detail. It can explain fine-grained image areas that have the greatest influence on diagnostic outcomes by mapping raw image pixels to fuzzy membership concepts. Lesions are found with effect sizes and statistical significance (p < 0.05).</div><div>Our model outperforms three existing methods in terms of interpretability and diagnostic accuracy by 10–15%, while maintaining computational efficiency. By disclosing crucial image evidence that supports AI decisions, this interpretable model improves transparency and clinician trust. Ethical implications of integrating AI in clinical settings are discussed, and future research directions are outlined. This study significantly advances the development of safe and interpretable AI for enhancing patient care through imaging analytics.</div></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"35 ","pages":"Article 100535"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart Health","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352648324000916","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Health Professions","Score":null,"Total":0}
引用次数: 0
Abstract
Artificial intelligence (AI) models for medical image analysis have achieved high diagnostic performance, but they often lack interpretability, limiting their clinical adoption. Existing methods can explain predictions at the image level but cannot provide pixel-level insights. This study proposes a novel fusion of machine learning and fuzzy logic to develop an interpretable model that precisely identifies the discriminative image regions driving diagnostic decisions and generates heatmap visualizations. The model is trained and evaluated on a dataset of CT scans containing healthy and diseased organ images. Quantitative features are extracted across pixels and normalized into representation matrices using a machine learning model. The contribution of each detected lesion to the overall prediction is then quantified using fuzzy logic, and organ-segment weighted averages are computed to identify significant lesions (a minimal illustrative sketch of this pipeline follows the abstract). By mapping raw image pixels to fuzzy membership concepts, the model pinpoints the fine-grained image areas that most influence diagnostic outcomes, explaining the application of AI in medical imaging at an unprecedented level of detail. Detected lesions are reported with effect sizes and statistical significance (p < 0.05).
Our model outperforms three existing methods, improving interpretability and diagnostic accuracy by 10–15% while maintaining computational efficiency. By disclosing the crucial image evidence that supports AI decisions, this interpretable model improves transparency and clinician trust. Ethical implications of integrating AI in clinical settings are discussed, and future research directions are outlined. This study significantly advances the development of safe, interpretable AI for enhancing patient care through imaging analytics.
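The abstract does not include the authors' implementation, so the following is only a minimal Python sketch of the pipeline it describes: normalized pixel features are mapped to fuzzy membership values, defuzzified into a per-pixel contribution heatmap, and averaged within organ segments to rank lesions. The triangular membership functions, the rule weights (1.0 and 0.5), and all function names here are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    left = (x - a) / max(b - a, 1e-9)
    right = (c - x) / max(c - b, 1e-9)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def pixel_contribution_heatmap(features):
    """Map normalized pixel features in [0, 1] to a fuzzy lesion-evidence score."""
    # Membership in a hypothetical "high evidence" fuzzy set, peaking at 1.0.
    high = triangular_membership(features, 0.5, 1.0, 1.5)
    # Membership in a hypothetical "medium evidence" fuzzy set, peaking at 0.5.
    medium = triangular_membership(features, 0.0, 0.5, 1.0)
    # Defuzzify with a simple weighted combination (assumed rule weights).
    return 1.0 * high + 0.5 * medium

def segment_weighted_averages(heatmap, segment_labels):
    """Average the heatmap within each organ segment to rank candidate lesions."""
    return {int(seg): float(heatmap[segment_labels == seg].mean())
            for seg in np.unique(segment_labels)}

# Toy example: a 4x4 "image" of normalized features split into two organ segments.
features = np.array([[0.1, 0.2, 0.9, 0.8],
                     [0.0, 0.1, 0.7, 0.9],
                     [0.2, 0.3, 0.4, 0.5],
                     [0.1, 0.0, 0.2, 0.3]])
segments = np.array([[0, 0, 1, 1]] * 4)

heatmap = pixel_contribution_heatmap(features)
print(segment_weighted_averages(heatmap, segments))  # segment 1 scores higher
```

In this toy run, segment 1 (the right half, holding the high-valued pixels) receives the larger weighted average, analogous to how the paper's organ-segment averages would flag a significant lesion.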