{"title":"Enhancing brain tumor MRI classification with an ensemble of deep learning models and transformer integration.","authors":"Nawal Benzorgat, Kewen Xia, Mustapha Noure Eddine Benzorgat","doi":"10.7717/peerj-cs.2425","DOIUrl":null,"url":null,"abstract":"<p><p>Brain tumors are widely recognized as the primary cause of cancer-related mortality globally, necessitating precise detection to enhance patient survival rates. The early identification of brain tumor is presented with significant challenges in the healthcare domain, necessitating the implementation of precise and efficient diagnostic methodologies. The manual identification and analysis of extensive MRI data are presented as a challenging and laborious task, compounded by the importance of early tumor detection in reducing mortality rates. Prompt initiation of treatment hinges upon identifying the specific tumor type in patients, emphasizing the urgency for a dependable deep learning methodology for precise diagnosis. In this research, a hybrid model is presented which integrates the strengths of both transfer learning and the transformer encoder mechanism. After the performance evaluation of the efficacy of six pre-existing deep learning model, both individually and in combination, it was determined that an ensemble of three pretrained models achieved the highest accuracy. This ensemble, comprising DenseNet201, GoogleNet (InceptionV3), and InceptionResNetV2, is selected as the feature extraction framework for the transformer encoder network. The transformer encoder module integrates a Shifted Window-based Self-Attention mechanism, sequential Self-Attention, with a multilayer perceptron layer (MLP). These experiments were conducted on three publicly available research datasets for evaluation purposes. The Cheng dataset, BT-large-2c, and BT-large-4c dataset, each designed for various classification tasks with differences in sample number, planes, and contrast. The model gives consistent results on all three datasets and reaches an accuracy of 99.34%, 99.16%, and 98.62%, respectively, which are improved compared to other techniques.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"10 ","pages":"e2425"},"PeriodicalIF":3.5000,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11623201/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PeerJ Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.7717/peerj-cs.2425","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Brain tumors are widely recognized as a primary cause of cancer-related mortality globally, necessitating precise detection to enhance patient survival rates. Early identification of brain tumors poses significant challenges in the healthcare domain and calls for precise and efficient diagnostic methodologies. Manual identification and analysis of extensive MRI data is a challenging and laborious task, and the importance of early tumor detection in reducing mortality rates compounds the difficulty. Prompt initiation of treatment hinges on identifying the specific tumor type in each patient, underscoring the need for a dependable deep learning methodology for precise diagnosis. In this research, a hybrid model is presented that integrates the strengths of both transfer learning and the transformer encoder mechanism. After evaluating the performance of six pre-existing deep learning models, both individually and in combination, an ensemble of three pretrained models was found to achieve the highest accuracy. This ensemble, comprising DenseNet201, GoogleNet (InceptionV3), and InceptionResNetV2, is selected as the feature extraction framework for the transformer encoder network. The transformer encoder module integrates a shifted window-based self-attention mechanism and sequential self-attention with a multilayer perceptron (MLP) layer. Experiments were conducted on three publicly available research datasets: the Cheng dataset, BT-large-2c, and BT-large-4c, each designed for a different classification task and differing in sample number, imaging planes, and contrast. The model gives consistent results on all three datasets, reaching accuracies of 99.34%, 99.16%, and 98.62%, respectively, an improvement over other techniques.
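To make the described pipeline concrete, below is a minimal sketch of how an ensemble of the three named pretrained backbones could feed a transformer encoder head for classification, written in TensorFlow/Keras. The input resolution, embedding size, head count, MLP width, and class count are illustrative assumptions, and a plain multi-head self-attention block stands in for the paper's shifted window-based and sequential self-attention module, whose exact design the abstract does not specify.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201, InceptionV3, InceptionResNetV2

def build_ensemble_transformer(input_shape=(299, 299, 3), num_classes=4,
                               embed_dim=256, num_heads=4, mlp_dim=512):
    # num_classes is dataset-dependent (e.g., 2 for BT-large-2c, 4 for BT-large-4c).
    inputs = layers.Input(shape=input_shape)

    # Three frozen ImageNet backbones used as parallel feature extractors.
    backbones = [
        DenseNet201(include_top=False, weights="imagenet",
                    input_shape=input_shape, pooling="avg"),
        InceptionV3(include_top=False, weights="imagenet",
                    input_shape=input_shape, pooling="avg"),
        InceptionResNetV2(include_top=False, weights="imagenet",
                          input_shape=input_shape, pooling="avg"),
    ]
    for b in backbones:
        b.trainable = False

    # Each backbone yields one pooled feature vector; project each to a common
    # embedding size and treat the three vectors as a length-3 token sequence.
    tokens = [layers.Dense(embed_dim)(b(inputs)) for b in backbones]
    x = layers.Lambda(lambda t: tf.stack(t, axis=1))(tokens)  # (batch, 3, embed_dim)

    # Generic transformer encoder block (self-attention + MLP, with residuals
    # and layer normalization); a stand-in for the paper's attention module.
    attn = layers.MultiHeadAttention(num_heads=num_heads,
                                     key_dim=embed_dim // num_heads)(x, x)
    x = layers.LayerNormalization()(x + attn)
    mlp = layers.Dense(mlp_dim, activation="gelu")(x)
    mlp = layers.Dense(embed_dim)(mlp)
    x = layers.LayerNormalization()(x + mlp)

    # Pool the token sequence and classify.
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_ensemble_transformer()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In this arrangement each frozen backbone contributes one pooled feature vector and the transformer encoder learns how to weight and mix the three views before classification; the actual study may fuse features differently, for example at the spatial feature-map level rather than after global pooling.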
Journal introduction:
PeerJ Computer Science is the new open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.