A Hybrid Learning-Architecture for Improved Brain Tumor Recognition
Jose Dixon, Oluwatunmise Akinniyi, Abeer Abdelhamid, Gehad A. Saleh, M. Rahman, Fahmi Khalifa
Algorithms, published 2024-05-21. DOI: 10.3390/a17060221
Citations: 0
Abstract
The accurate classification of brain tumors is an important step toward early intervention. Artificial intelligence (AI)-based diagnostic systems have been used in recent years to help automate the process and provide a more objective and faster diagnosis. This work introduces an enhanced AI-based architecture for improved brain tumor classification. We propose a hybrid architecture that integrates a vision transformer (ViT) and deep neural networks to create an ensemble classifier, resulting in a more robust brain tumor classification framework. The analysis pipeline begins with preprocessing and data normalization, followed by the extraction of three types of MRI-derived, information-rich features. The first includes higher-order texture and structural feature sets, derived using Haralick features and local binary patterns, that capture the spatial interactions between image intensities. Additionally, local deep features of the brain images are extracted using an optimized convolutional neural network (CNN) architecture. Finally, ViT-derived features are also integrated, owing to their ability to handle dependencies across larger distances while being less sensitive to data augmentation. The extracted features are then weighted, fused, and fed to a machine learning classifier for the final classification of brain MRIs. The proposed weighted ensemble architecture has been evaluated on publicly available and locally collected brain MRIs spanning four classes using various metrics. Ablation studies showed that leveraging the benefits of the individual components of the proposed architecture leads to improved performance.
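The abstract describes a feature-fusion pipeline: handcrafted texture descriptors (Haralick/GLCM statistics and local binary pattern histograms), CNN- and ViT-derived embeddings, weighted fusion, and a conventional machine learning classifier. The sketch below illustrates that general flow with scikit-image and scikit-learn; the GLCM offsets and angles, LBP radius, fusion weights, placeholder deep-feature vectors, and SVM classifier are illustrative assumptions and do not reflect the authors' actual configuration.

```python
# Minimal sketch of the handcrafted-feature branch and weighted fusion outlined in
# the abstract. All hyperparameters below (GLCM distances/angles, LBP settings,
# fusion weights, classifier choice) are assumptions for illustration only.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC


def haralick_features(image_u8: np.ndarray) -> np.ndarray:
    """Second-order (GLCM/Haralick-style) texture statistics for a 2D uint8 slice."""
    glcm = graycomatrix(
        image_u8,
        distances=[1, 2],                                   # assumed pixel offsets
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],    # assumed directions
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])


def lbp_histogram(image_u8: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Normalized histogram of uniform local binary patterns."""
    lbp = local_binary_pattern(image_u8, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist


def fuse_features(texture: np.ndarray, cnn_emb: np.ndarray, vit_emb: np.ndarray,
                  weights=(0.2, 0.4, 0.4)) -> np.ndarray:
    """Z-score each feature block, scale it by its weight, and concatenate."""
    blocks = []
    for w, block in zip(weights, (texture, cnn_emb, vit_emb)):
        z = (block - block.mean()) / (block.std() + 1e-8)
        blocks.append(w * z)
    return np.concatenate(blocks)


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Toy example: synthetic "MRI slices" and random stand-ins for the CNN/ViT
    # embeddings, just to show the fusion and classification plumbing end to end.
    X, y = [], []
    for label in range(4):                  # four tumor classes, as in the paper
        for _ in range(5):
            s = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
            tex = np.hstack([haralick_features(s), lbp_histogram(s)])
            cnn_emb = rng.standard_normal(256)   # placeholder for CNN features
            vit_emb = rng.standard_normal(256)   # placeholder for ViT features
            X.append(fuse_features(tex, cnn_emb, vit_emb))
            y.append(label)

    clf = SVC(kernel="rbf").fit(np.asarray(X), np.asarray(y))  # classifier is an assumption
    print(clf.predict(np.asarray(X[:2])))
```

In practice the placeholder embeddings would be replaced by features pooled from the trained CNN and ViT backbones, and the fusion weights would be tuned rather than fixed; the sketch only shows how weighted, per-block normalization and concatenation can feed a standard classifier.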