Deep-Net: Fine-Tuned Deep Neural Network Multi-Features Fusion for Brain Tumor Recognition

Muhammad Attique Khan, Reham R. Mostafa, Yu-Dong Zhang, Jamel Baili, Majed Alhaisoni, Usman Tariq, Junaid Ali Khan, Ye Jin Kim, Jaehyuk Cha
{"title":"Deep-Net: Fine-Tuned Deep Neural Network Multi-Features Fusion for Brain Tumor Recognition","authors":"Muhammad Attique Khan, Reham R. Mostafa, Yu-Dong Zhang, Jamel Baili, Majed Alhaisoni, Usman Tariq, Junaid Ali Khan, Ye Jin Kim, Jaehyuk Cha","doi":"10.32604/cmc.2023.038838","DOIUrl":null,"url":null,"abstract":"Manual diagnosis of brain tumors using magnetic resonance images (MRI) is a hectic process and time-consuming. Also, it always requires an expert person for the diagnosis. Therefore, many computer-controlled methods for diagnosing and classifying brain tumors have been introduced in the literature. This paper proposes a novel multimodal brain tumor classification framework based on two-way deep learning feature extraction and a hybrid feature optimization algorithm. NasNet-Mobile, a pre-trained deep learning model, has been fine-tuned and two-way trained on original and enhanced MRI images. The haze-convolutional neural network (haze-CNN) approach is developed and employed on the original images for contrast enhancement. Next, transfer learning (TL) is utilized for training two-way fine-tuned models and extracting feature vectors from the global average pooling layer. Then, using a multiset canonical correlation analysis (CCA) method, features of both deep learning models are fused into a single feature matrix—this technique aims to enhance the information in terms of features for better classification. Although the information was increased, computational time also jumped. This issue is resolved using a hybrid feature optimization algorithm that chooses the best classification features. The experiments were done on two publicly available datasets—BraTs2018 and BraTs2019—and yielded accuracy rates of 94.8% and 95.7%, respectively. The proposed method is compared with several recent studies and outperformed in accuracy. In addition, we analyze the performance of each middle step of the proposed approach and find the selection technique strengthens the proposed framework.","PeriodicalId":93535,"journal":{"name":"Computers, materials & continua","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers, materials & continua","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.32604/cmc.2023.038838","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Manual diagnosis of brain tumors from magnetic resonance images (MRI) is a tedious, time-consuming process that always requires an expert. Therefore, many computer-aided methods for diagnosing and classifying brain tumors have been introduced in the literature. This paper proposes a novel multimodal brain tumor classification framework based on two-way deep learning feature extraction and a hybrid feature optimization algorithm. NasNet-Mobile, a pre-trained deep learning model, is fine-tuned and trained in a two-way fashion on original and enhanced MRI images. A haze-convolutional neural network (haze-CNN) approach is developed and applied to the original images for contrast enhancement. Next, transfer learning (TL) is used to train the two fine-tuned models and to extract feature vectors from their global average pooling layers. The features of both deep learning models are then fused into a single feature matrix using a multiset canonical correlation analysis (CCA) method; this step enriches the feature information for better classification. Although the information increases, so does the computational time. This issue is resolved with a hybrid feature optimization algorithm that selects the best features for classification. Experiments on two publicly available datasets, BraTS2018 and BraTS2019, yield accuracy rates of 94.8% and 95.7%, respectively. The proposed method is compared with several recent studies and outperforms them in accuracy. In addition, we analyze the performance of each intermediate step of the proposed approach and find that the selection technique strengthens the proposed framework.
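
The pipeline described above (fine-tuning a pre-trained NasNet-Mobile via transfer learning, extracting feature vectors from the global average pooling layer of each trained stream, and fusing the two feature sets with canonical correlation analysis) can be illustrated with a short sketch. The code below is not the authors' implementation: it uses Keras' NASNetMobile and scikit-learn's standard two-set CCA as a stand-in for the paper's multiset CCA, and the class count, input shape, and number of CCA components are assumed placeholders.

```python
# Hedged sketch of the transfer-learning feature extraction and feature-fusion
# steps described in the abstract. NASNetMobile comes from Keras; standard
# two-set CCA from scikit-learn stands in for the paper's multiset CCA.
# NUM_CLASSES, INPUT_SHAPE, and n_components are assumed placeholders.
import numpy as np
from tensorflow.keras.applications import NASNetMobile
from tensorflow.keras import layers, models
from sklearn.cross_decomposition import CCA

NUM_CLASSES = 4              # assumed number of tumor classes
INPUT_SHAPE = (224, 224, 3)  # NASNetMobile's default input size

def build_finetuned_nasnet():
    """Pre-trained NASNetMobile with a new softmax head (transfer learning)."""
    base = NASNetMobile(weights="imagenet", include_top=False,
                        input_shape=INPUT_SHAPE, pooling="avg")
    base.trainable = False   # freeze the backbone; train only the new head
    head = layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    model = models.Model(base.input, head)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # base.output is the global-average-pooling vector used as the feature extractor
    return model, base

# Two streams: one model for the original MRIs, one for the contrast-enhanced MRIs.
model_orig, extractor_orig = build_finetuned_nasnet()
model_enh, extractor_enh = build_finetuned_nasnet()
# model_orig.fit(original_images, labels, ...)  # training calls omitted
# model_enh.fit(enhanced_images, labels, ...)

def fuse_with_cca(feats_a, feats_b, n_components=64):
    """Project both feature matrices with CCA and concatenate the projections."""
    cca = CCA(n_components=n_components)
    a_c, b_c = cca.fit_transform(feats_a, feats_b)
    return np.concatenate([a_c, b_c], axis=1)

# Usage (the image arrays are hypothetical):
# fused = fuse_with_cca(extractor_orig.predict(original_images),
#                       extractor_enh.predict(enhanced_images))
```

The sketch stops at the fusion step: the haze-CNN contrast enhancement and the hybrid feature-selection stage from the paper are not reproduced here, and in practice the fused (and then selected) feature matrix would be passed to a conventional classifier.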