Changxiong Xie, Jianming Ye, Xiaofei Ma, Leshui Dong, Guohua Zhao, Jingliang Cheng, Guang Yang, Xiaobo Lai
{"title":"在多模态磁共振成像数据中自动分割脑胶质瘤","authors":"Changxiong Xie, Jianming Ye, Xiaofei Ma, Leshui Dong, Guohua Zhao, Jingliang Cheng, Guang Yang, Xiaobo Lai","doi":"10.1002/ima.23128","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Brain gliomas, common in adults, pose significant diagnostic challenges. Accurate segmentation from multimodal magnetic resonance imaging (MRI) scans is critical for effective treatment planning. Traditional manual segmentation methods, labor-intensive and error-prone, often lead to inconsistent diagnoses. To overcome these limitations, our study presents a sophisticated framework for the automated segmentation of brain gliomas from multimodal MRI images. This framework consists of three integral components: a 3D UNet, a classifier, and a Classifier Weight Transformer (CWT). The 3D UNet, acting as both an encoder and decoder, is instrumental in extracting comprehensive features from MRI scans. The classifier, employing a streamlined 1 × 1 convolutional architecture, performs detailed pixel-wise classification. The CWT integrates self-attention mechanisms through three linear layers, a multihead attention module, and layer normalization, dynamically refining the classifier's parameters based on the features extracted by the 3D UNet, thereby improving segmentation accuracy. Our model underwent a two-stage training process for maximum efficiency: in the first stage, supervised learning was used to pre-train the encoder and decoder, focusing on robust feature representation. In the second stage, meta-training was applied to the classifier, with the encoder and decoder remaining unchanged, ensuring precise fine-tuning based on the initially developed features. Extensive evaluation of datasets such as BraTS2019, BraTS2020, BraTS2021, and a specialized private dataset (ZZU) underscored the robustness and clinical potential of our framework, highlighting its superiority and competitive advantage over several state-of-the-art approaches across various segmentation metrics in training and validation sets.</p>\n </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Automated Segmentation of Brain Gliomas in Multimodal MRI Data\",\"authors\":\"Changxiong Xie, Jianming Ye, Xiaofei Ma, Leshui Dong, Guohua Zhao, Jingliang Cheng, Guang Yang, Xiaobo Lai\",\"doi\":\"10.1002/ima.23128\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Brain gliomas, common in adults, pose significant diagnostic challenges. Accurate segmentation from multimodal magnetic resonance imaging (MRI) scans is critical for effective treatment planning. Traditional manual segmentation methods, labor-intensive and error-prone, often lead to inconsistent diagnoses. To overcome these limitations, our study presents a sophisticated framework for the automated segmentation of brain gliomas from multimodal MRI images. This framework consists of three integral components: a 3D UNet, a classifier, and a Classifier Weight Transformer (CWT). The 3D UNet, acting as both an encoder and decoder, is instrumental in extracting comprehensive features from MRI scans. The classifier, employing a streamlined 1 × 1 convolutional architecture, performs detailed pixel-wise classification. 
The CWT integrates self-attention mechanisms through three linear layers, a multihead attention module, and layer normalization, dynamically refining the classifier's parameters based on the features extracted by the 3D UNet, thereby improving segmentation accuracy. Our model underwent a two-stage training process for maximum efficiency: in the first stage, supervised learning was used to pre-train the encoder and decoder, focusing on robust feature representation. In the second stage, meta-training was applied to the classifier, with the encoder and decoder remaining unchanged, ensuring precise fine-tuning based on the initially developed features. Extensive evaluation of datasets such as BraTS2019, BraTS2020, BraTS2021, and a specialized private dataset (ZZU) underscored the robustness and clinical potential of our framework, highlighting its superiority and competitive advantage over several state-of-the-art approaches across various segmentation metrics in training and validation sets.</p>\\n </div>\",\"PeriodicalId\":14027,\"journal\":{\"name\":\"International Journal of Imaging Systems and Technology\",\"volume\":\"34 4\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Imaging Systems and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ima.23128\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.23128","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Automated Segmentation of Brain Gliomas in Multimodal MRI Data
Brain gliomas, common in adults, pose significant diagnostic challenges. Accurate segmentation from multimodal magnetic resonance imaging (MRI) scans is critical for effective treatment planning. Traditional manual segmentation is labor-intensive and error-prone, often leading to inconsistent diagnoses. To overcome these limitations, our study presents a framework for the automated segmentation of brain gliomas from multimodal MRI images. The framework consists of three components: a 3D UNet, a classifier, and a Classifier Weight Transformer (CWT). The 3D UNet, acting as both encoder and decoder, extracts comprehensive features from the MRI scans. The classifier, a lightweight 1 × 1 convolutional layer, performs pixel-wise classification. The CWT applies self-attention through three linear layers, a multihead attention module, and layer normalization, dynamically refining the classifier's parameters based on the features extracted by the 3D UNet and thereby improving segmentation accuracy. The model is trained in two stages: in the first stage, supervised learning pre-trains the encoder and decoder to learn robust feature representations; in the second stage, the classifier is meta-trained while the encoder and decoder remain frozen, fine-tuning the classification on the previously learned features. Extensive evaluation on the BraTS2019, BraTS2020, and BraTS2021 datasets, as well as a private dataset (ZZU), underscores the robustness and clinical potential of our framework, showing its advantage over several state-of-the-art approaches across various segmentation metrics on the training and validation sets.
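As a rough illustration of the CWT component described in the abstract, the sketch below shows how attention over flattened decoder features could refine the weights of a 1 × 1 pixel-wise classifier. This is a minimal PyTorch sketch under assumed names and dimensions (ClassifierWeightTransformer, feat_dim, num_classes, the query/key/value wiring, and the residual placement are all assumptions), not the authors' implementation.

# Hypothetical sketch of the Classifier Weight Transformer (CWT) idea: attention
# over 3D UNet features refines the weights of a 1x1 (here, linear) pixel-wise
# classifier. Names, dimensions, and wiring are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ClassifierWeightTransformer(nn.Module):
    def __init__(self, feat_dim: int = 64, num_classes: int = 4, num_heads: int = 4):
        super().__init__()
        # Three linear layers: query from classifier weights, key/value from features
        self.q_proj = nn.Linear(feat_dim, feat_dim)
        self.k_proj = nn.Linear(feat_dim, feat_dim)
        self.v_proj = nn.Linear(feat_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        self.num_classes = num_classes

    def forward(self, classifier_weight: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        # classifier_weight: [num_classes, feat_dim] -- weights of the 1x1 classifier
        # features: [batch, num_voxels, feat_dim] -- flattened 3D UNet decoder features
        b = features.size(0)
        q = self.q_proj(classifier_weight).unsqueeze(0).expand(b, -1, -1)  # [B, C, D]
        k = self.k_proj(features)
        v = self.v_proj(features)
        refined, _ = self.attn(q, k, v)   # attend classifier weights to image features
        refined = self.norm(refined + q)  # residual connection + layer normalization
        return refined                    # [B, num_classes, feat_dim]


# Usage: refine the classifier per volume, then score every voxel.
if __name__ == "__main__":
    feat_dim, num_classes = 64, 4
    features = torch.randn(2, 1024, feat_dim)          # stand-in for UNet features
    base_weight = torch.randn(num_classes, feat_dim)   # base 1x1 classifier weights
    cwt = ClassifierWeightTransformer(feat_dim, num_classes)
    w = cwt(base_weight, features)                     # [2, 4, 64]
    logits = torch.einsum("bnd,bcd->bnc", features, w)  # pixel-wise class scores
    print(logits.shape)                                # torch.Size([2, 1024, 4])
    # Two-stage training (assumption): in stage two the UNet would be frozen
    # (p.requires_grad = False for its parameters) and only the CWT/classifier
    # parameters would be meta-trained.

In this sketch the refined weights act as per-volume class prototypes; the einsum in the usage example plays the role of the 1 × 1 convolution, producing a class score for every voxel.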
Journal Description:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, and replication studies, as well as negative results, are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS, etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.