DBAII-Net with multiscale feature aggregation and cross-modal attention for enhancing infant brain injury classification in MRI

Zhen Jia, Tingting Huang, Xianjun Li, Yitong Bian, Fan Wang, Jianmin Yuan, Guanghua Xu, Jian Yang

Physics in Medicine and Biology (Q2, Engineering, Biomedical; Impact Factor 3.3) | Journal Article | Published 2024-10-14 | DOI: 10.1088/1361-6560/ad80f7 | Citations: 0
Abstract
Objectives. Magnetic resonance imaging (MRI) is pivotal in diagnosing brain injuries in infants. However, the dynamic development of the brain introduces variability in infant MRI characteristics, posing challenges for MRI-based classification in this population. Furthermore, manual data selection in large-scale studies is labor-intensive, and existing algorithms often underperform on thick-slice MRI data. To enhance research efficiency and classification accuracy in large datasets, we propose an advanced classification model.

Approach. We introduce the Dual-Branch Attention Information Interactive Neural Network (DBAII-Net), a model inspired by radiologists' use of multiple MRI sequences. DBAII-Net features two modules: (1) the convolutional enhancement module (CEM), which aggregates multi-scale features through advanced convolutional techniques, strengthening information representation; and (2) the cross-modal attention module (CMAM), which fuses data across the two branches with attention mechanisms, improving positional and channel feature extraction. The performance of DBAII-Net (accuracy, sensitivity, specificity, area under the curve (AUC), etc.) was compared with that of eight benchmark models for brain MRI classification in infants aged 6 months to 2 years.

Main results. On a self-constructed dataset of 240 thick-slice brain MRI scans (122 with brain injuries, 118 without), DBAII-Net demonstrated superior performance. On a test set of approximately 50 cases, DBAII-Net achieved average performance metrics of 92.53% accuracy, 90.20% sensitivity, 94.93% specificity, and an AUC of 0.9603. Ablation studies confirmed the effectiveness of CEM and CMAM, with CMAM significantly boosting classification metrics.

Significance. DBAII-Net with CEM and CMAM outperforms existing benchmarks in the precision of infant brain MRI classification, substantially reducing manual effort in infant brain research. Our code is available at https://github.com/jiazhen4585/DBAII-Net.
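To make the dual-branch idea in the abstract concrete, the sketch below pairs a multi-scale convolutional block (in the spirit of CEM) with a cross-attention fusion step (in the spirit of CMAM) in PyTorch. This is a minimal illustration, not the authors' implementation: the module names, channel sizes, pooling factor, and single-channel 2D inputs are all assumptions; the actual DBAII-Net code is in the GitHub repository linked above.

```python
# Illustrative sketch only (assumed design, not the published DBAII-Net code).
import torch
import torch.nn as nn


class MultiScaleConvBlock(nn.Module):
    """Aggregate features at several receptive fields (CEM-like, assumed)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        # Parallel 1x1, 3x3, and 5x5 convolutions capture multi-scale context.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(branch_ch * 3, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Concatenate multi-scale responses, then fuse with a 1x1 convolution.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class CrossAttentionFusion(nn.Module):
    """Fuse two branches with multi-head cross-attention (CMAM-like, assumed)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_a, feat_b):
        # Flatten spatial maps to token sequences: (B, C, H, W) -> (B, H*W, C).
        b, c, h, w = feat_a.shape
        tokens_a = feat_a.flatten(2).transpose(1, 2)
        tokens_b = feat_b.flatten(2).transpose(1, 2)
        # Branch A queries branch B, so one sequence attends to the other.
        fused, _ = self.attn(query=tokens_a, key=tokens_b, value=tokens_b)
        fused = self.norm(fused + tokens_a)  # residual connection + layer norm
        return fused.transpose(1, 2).reshape(b, c, h, w)


class DualBranchClassifier(nn.Module):
    """Toy dual-branch model: two MRI sequences in, class logits out."""

    def __init__(self, num_classes: int = 2, dim: int = 48):
        super().__init__()
        self.branch1 = MultiScaleConvBlock(1, dim)
        self.branch2 = MultiScaleConvBlock(1, dim)
        self.pool = nn.MaxPool2d(4)  # shrink spatial size before attention
        self.fusion = CrossAttentionFusion(dim)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, num_classes)
        )

    def forward(self, seq1, seq2):
        f1 = self.pool(self.branch1(seq1))
        f2 = self.pool(self.branch2(seq2))
        return self.head(self.fusion(f1, f2))


if __name__ == "__main__":
    model = DualBranchClassifier()
    t1 = torch.randn(2, 1, 128, 128)  # e.g. slices from one MRI sequence
    t2 = torch.randn(2, 1, 128, 128)  # e.g. slices from a second sequence
    print(model(t1, t2).shape)  # torch.Size([2, 2])
```

Handling of thick-slice volumes, training details, and the actual CEM/CMAM designs are beyond this sketch; refer to the repository linked in the abstract for the authors' code.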
Journal Introduction
The development and application of theoretical, computational and experimental physics to medicine, physiology and biology. Topics covered are: therapy physics (including ionizing and non-ionizing radiation); biomedical imaging (e.g. x-ray, magnetic resonance, ultrasound, optical and nuclear imaging); image-guided interventions; image reconstruction and analysis (including kinetic modelling); artificial intelligence in biomedical physics and analysis; nanoparticles in imaging and therapy; radiobiology; radiation protection and patient dose monitoring; radiation dosimetry.