Moshiur Rahman Tonmoy , Md. Atik Shams , Md. Akhtaruzzaman Adnan , M.F. Mridha , Mejdl Safran , Sultan Alfarhood , Dunren Che
{"title":"X-Brain:使用鲁棒深度注意 CNN 对脑肿瘤进行可解释识别","authors":"Moshiur Rahman Tonmoy , Md. Atik Shams , Md. Akhtaruzzaman Adnan , M.F. Mridha , Mejdl Safran , Sultan Alfarhood , Dunren Che","doi":"10.1016/j.bspc.2024.106988","DOIUrl":null,"url":null,"abstract":"<div><div>Automated brain tumor recognition is crucial for swift diagnosis and treatment in healthcare, enhancing patient survival rates but manual recognition of tumor types is time-consuming and resource-intensive. Over the past few years, researchers have proposed various Deep Learning (DL) methods to automate the recognition process over the past years. However, these approaches often lack convincing accuracy and rely on datasets consisting of limited samples, raising concerns regarding real-world efficacy and reliability. Furthermore, the decisions made by black-box AI models in healthcare, where lives are at stake, require proper decision explainability. To address these issues, we propose a robust and explainable deep CNN-based method for effective recognition of brain tumor types, attaining state-of-the-art accuracies of 99.81%, 99.55%, and 99.30% on the training, validation, and test sets, respectively, surpassing both the previous works and baseline models. Moreover, we employed three Explainable AI techniques: Grad-CAM, Grad-CAM++, and Score-CAM for explainability analysis, contributing towards the development of trustworthy and reliable automation of healthcare diagnosis.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9000,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"X-Brain: Explainable recognition of brain tumors using robust deep attention CNN\",\"authors\":\"Moshiur Rahman Tonmoy , Md. Atik Shams , Md. Akhtaruzzaman Adnan , M.F. Mridha , Mejdl Safran , Sultan Alfarhood , Dunren Che\",\"doi\":\"10.1016/j.bspc.2024.106988\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Automated brain tumor recognition is crucial for swift diagnosis and treatment in healthcare, enhancing patient survival rates but manual recognition of tumor types is time-consuming and resource-intensive. Over the past few years, researchers have proposed various Deep Learning (DL) methods to automate the recognition process over the past years. However, these approaches often lack convincing accuracy and rely on datasets consisting of limited samples, raising concerns regarding real-world efficacy and reliability. Furthermore, the decisions made by black-box AI models in healthcare, where lives are at stake, require proper decision explainability. To address these issues, we propose a robust and explainable deep CNN-based method for effective recognition of brain tumor types, attaining state-of-the-art accuracies of 99.81%, 99.55%, and 99.30% on the training, validation, and test sets, respectively, surpassing both the previous works and baseline models. 
Moreover, we employed three Explainable AI techniques: Grad-CAM, Grad-CAM++, and Score-CAM for explainability analysis, contributing towards the development of trustworthy and reliable automation of healthcare diagnosis.</div></div>\",\"PeriodicalId\":55362,\"journal\":{\"name\":\"Biomedical Signal Processing and Control\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2024-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Signal Processing and Control\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1746809424010462\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809424010462","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
X-Brain: Explainable recognition of brain tumors using robust deep attention CNN
Automated brain tumor recognition is crucial for swift diagnosis and treatment in healthcare and can improve patient survival rates, but manual recognition of tumor types is time-consuming and resource-intensive. Over the past few years, researchers have proposed various Deep Learning (DL) methods to automate the recognition process. However, these approaches often lack convincing accuracy and rely on datasets with limited samples, raising concerns about real-world efficacy and reliability. Furthermore, decisions made by black-box AI models in healthcare, where lives are at stake, require proper explainability. To address these issues, we propose a robust and explainable deep CNN-based method for effective recognition of brain tumor types, attaining state-of-the-art accuracies of 99.81%, 99.55%, and 99.30% on the training, validation, and test sets, respectively, surpassing both previous works and baseline models. Moreover, we employed three Explainable AI techniques, Grad-CAM, Grad-CAM++, and Score-CAM, for explainability analysis, contributing towards the development of trustworthy and reliable automation of healthcare diagnosis.
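The explainability analysis relies on class activation mapping. As a rough illustration of how Grad-CAM produces such heatmaps, the sketch below applies it to a stand-in torchvision ResNet-18; the backbone, target layer, and input size are placeholder assumptions, not the paper's attention CNN, whose code is not reproduced here.

```python
# Minimal Grad-CAM sketch (illustrative only, not the authors' released code).
# Assumption: a torchvision ResNet-18 stands in for the paper's attention CNN.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
target_layer = model.layer4  # last convolutional block; its feature maps drive the CAM

feats, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed MRI slice
scores = model(x)                      # class logits
cls = scores.argmax(dim=1).item()      # explain the predicted class
scores[0, cls].backward()

# Grad-CAM: channel weights are the spatially averaged gradients; the map is the
# ReLU of the weighted sum of feature maps, upsampled to the input resolution.
weights = grads["a"].mean(dim=(2, 3), keepdim=True)            # (1, C, 1, 1)
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))  # (1, 1, h, w)
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # normalize to [0, 1]
```

Grad-CAM++ and Score-CAM follow the same weighted-sum structure and differ mainly in how the channel weights are obtained (higher-order gradient terms and forward-pass score perturbations, respectively).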
Journal Introduction:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.