Najmul Hassan;Abu Saleh Musa Miah;Yuichi Okuyama;Jungpil Shin
{"title":"结合深度学习和纹理分析的综合特征融合神经系统疾病识别","authors":"Najmul Hassan;Abu Saleh Musa Miah;Yuichi Okuyama;Jungpil Shin","doi":"10.1109/OJCS.2025.3594701","DOIUrl":null,"url":null,"abstract":"Neurological disorders, including Brain Tumors (BTs), Alzheimer’s Disease (AD), and Parkinson’s Disease (PD), pose significant global health challenges. Early and accurate diagnosis is crucial for effective treatment and improved patient outcomes. Magnetic Resonance Imaging (MRI) is a key diagnostic tool, but traditional Machine Learning (ML) approaches often rely on labor-intensive handcrafted features, leading to inconsistent performance. Recent advancements in Deep Learning (DL) enable automated feature extraction, which offers improved robustness and scalability. However, many existing methods face challenges in fully exploiting the complementary strengths of DL and handcrafted features across multiple disease types. This study proposes a novel hybrid DL model that integrates automated deep features with statistical textural descriptors for the classification of BTs, AD, and PD. The model employs a dual-stream architecture: (1) a modified VGG16 convolutional neural network (CNN), chosen for its favorable trade-off between performance and computational efficiency in medical imaging, to extract deep features from MRI slices, and (2) a sequential one dimensional (1D) CNN to process six gray-level co-occurrenc matrix (GLCM)derived handcrafted features, empirically validated for their superior discriminative power in neuroanatomical texture analysis. By integrating these complementary feature sets, the model leverages global patterns and fine-grained textural details, resulting in a robust and comprehensive representation for accurate and reliable medical image classification. Gradient-weighted class activation mapping (Grad-CAM) is incorporated to enhance interpretability by localizing diagnostically relevant brain regions. The fused features are passed through a fully connected layer for final classification. The proposed model was evaluated on four publicly available MRI datasets, achieving accuracies of 98.86%, 99.50%, 98.52%, and 99.13% on the CE-MRI (multi-class BT), Br35H (binary BT), AD, and PD datasets, respectively. The model achieved an average classification accuracy of 99.05% across the three neurological disorders. Our method outperforms recent state-of-the-art (SOTA) methods, which shows the effectiveness of the proposed model integrating DL and handcrafted features to develop interpretable, robust, and generalizable AI-driven diagnostic systems.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1366-1377"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11106739","citationCount":"0","resultStr":"{\"title\":\"Neurological Disorder Recognition via Comprehensive Feature Fusion by Integrating Deep Learning and Texture Analysis\",\"authors\":\"Najmul Hassan;Abu Saleh Musa Miah;Yuichi Okuyama;Jungpil Shin\",\"doi\":\"10.1109/OJCS.2025.3594701\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neurological disorders, including Brain Tumors (BTs), Alzheimer’s Disease (AD), and Parkinson’s Disease (PD), pose significant global health challenges. Early and accurate diagnosis is crucial for effective treatment and improved patient outcomes. 
Magnetic Resonance Imaging (MRI) is a key diagnostic tool, but traditional Machine Learning (ML) approaches often rely on labor-intensive handcrafted features, leading to inconsistent performance. Recent advancements in Deep Learning (DL) enable automated feature extraction, which offers improved robustness and scalability. However, many existing methods face challenges in fully exploiting the complementary strengths of DL and handcrafted features across multiple disease types. This study proposes a novel hybrid DL model that integrates automated deep features with statistical textural descriptors for the classification of BTs, AD, and PD. The model employs a dual-stream architecture: (1) a modified VGG16 convolutional neural network (CNN), chosen for its favorable trade-off between performance and computational efficiency in medical imaging, to extract deep features from MRI slices, and (2) a sequential one dimensional (1D) CNN to process six gray-level co-occurrenc matrix (GLCM)derived handcrafted features, empirically validated for their superior discriminative power in neuroanatomical texture analysis. By integrating these complementary feature sets, the model leverages global patterns and fine-grained textural details, resulting in a robust and comprehensive representation for accurate and reliable medical image classification. Gradient-weighted class activation mapping (Grad-CAM) is incorporated to enhance interpretability by localizing diagnostically relevant brain regions. The fused features are passed through a fully connected layer for final classification. The proposed model was evaluated on four publicly available MRI datasets, achieving accuracies of 98.86%, 99.50%, 98.52%, and 99.13% on the CE-MRI (multi-class BT), Br35H (binary BT), AD, and PD datasets, respectively. The model achieved an average classification accuracy of 99.05% across the three neurological disorders. Our method outperforms recent state-of-the-art (SOTA) methods, which shows the effectiveness of the proposed model integrating DL and handcrafted features to develop interpretable, robust, and generalizable AI-driven diagnostic systems.\",\"PeriodicalId\":13205,\"journal\":{\"name\":\"IEEE Open Journal of the Computer Society\",\"volume\":\"6 \",\"pages\":\"1366-1377\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11106739\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of the Computer Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11106739/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11106739/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Neurological Disorder Recognition via Comprehensive Feature Fusion by Integrating Deep Learning and Texture Analysis
Neurological disorders, including Brain Tumors (BTs), Alzheimer’s Disease (AD), and Parkinson’s Disease (PD), pose significant global health challenges. Early and accurate diagnosis is crucial for effective treatment and improved patient outcomes. Magnetic Resonance Imaging (MRI) is a key diagnostic tool, but traditional Machine Learning (ML) approaches often rely on labor-intensive handcrafted features, leading to inconsistent performance. Recent advancements in Deep Learning (DL) enable automated feature extraction, offering improved robustness and scalability. However, many existing methods struggle to fully exploit the complementary strengths of DL and handcrafted features across multiple disease types. This study proposes a novel hybrid DL model that integrates automated deep features with statistical textural descriptors for the classification of BTs, AD, and PD. The model employs a dual-stream architecture: (1) a modified VGG16 convolutional neural network (CNN), chosen for its favorable trade-off between performance and computational efficiency in medical imaging, extracts deep features from MRI slices, and (2) a sequential one-dimensional (1D) CNN processes six gray-level co-occurrence matrix (GLCM)-derived handcrafted features, empirically validated for their superior discriminative power in neuroanatomical texture analysis. By integrating these complementary feature sets, the model leverages both global patterns and fine-grained textural details, resulting in a robust and comprehensive representation for accurate and reliable medical image classification. Gradient-weighted class activation mapping (Grad-CAM) is incorporated to enhance interpretability by localizing diagnostically relevant brain regions. The fused features are passed through a fully connected layer for final classification. The proposed model was evaluated on four publicly available MRI datasets, achieving accuracies of 98.86%, 99.50%, 98.52%, and 99.13% on the CE-MRI (multi-class BT), Br35H (binary BT), AD, and PD datasets, respectively. The model achieved an average classification accuracy of 99.05% across the three neurological disorders. Our method outperforms recent state-of-the-art (SOTA) methods, demonstrating the effectiveness of integrating DL and handcrafted features to develop interpretable, robust, and generalizable AI-driven diagnostic systems.
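The handcrafted stream described in the abstract relies on six GLCM-derived texture descriptors per MRI slice. The abstract does not name the six descriptors, so the sketch below assumes the six standard properties exposed by scikit-image (contrast, dissimilarity, homogeneity, energy, correlation, ASM); it is a minimal illustration of how such features could be computed, not the authors' exact pipeline.

```python
# Hypothetical sketch: six GLCM-derived texture features for one MRI slice.
# Assumption: the six descriptors are scikit-image's standard GLCM properties.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(slice_2d: np.ndarray, levels: int = 256) -> np.ndarray:
    """Return a 6-element texture descriptor vector for a 2-D grayscale slice."""
    img = slice_2d.astype(np.uint8)
    # Symmetric, normalized co-occurrence matrix at distance 1, angle 0.
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity",
             "energy", "correlation", "ASM"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])
```

In practice, each descriptor vector would be paired with its corresponding MRI slice and fed to the 1D-CNN stream of the model.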
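The dual-stream fusion idea itself can be summarized in a short Keras sketch: a VGG16-based image stream, a small 1D CNN over the six GLCM features, concatenation of the two feature sets, and a fully connected classification head. Layer sizes, pooling choices, and the frozen ImageNet backbone are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a dual-stream (deep + handcrafted) fusion classifier.
# Hyperparameters and layer choices are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_dual_stream_model(num_classes: int, img_shape=(224, 224, 3)) -> Model:
    # Stream 1: deep features from MRI slices via a VGG16 backbone.
    backbone = VGG16(include_top=False, weights="imagenet", input_shape=img_shape)
    backbone.trainable = False  # assumption: backbone kept frozen
    img_in = layers.Input(shape=img_shape, name="mri_slice")
    x = layers.GlobalAveragePooling2D()(backbone(img_in))

    # Stream 2: a small 1D CNN over the six GLCM-derived texture features.
    glcm_in = layers.Input(shape=(6, 1), name="glcm_features")
    y = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(glcm_in)
    y = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(y)
    y = layers.GlobalAveragePooling1D()(y)

    # Fusion: concatenate both streams, then classify with a dense head.
    fused = layers.Concatenate()([x, y])
    fused = layers.Dense(256, activation="relu")(fused)
    out = layers.Dense(num_classes, activation="softmax")(fused)
    return Model(inputs=[img_in, glcm_in], outputs=out)
```

Concatenation before the dense head lets the classifier weight global deep features and fine-grained texture statistics jointly, which is the complementarity the abstract attributes to the proposed model.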