C Wu, W Zhong, J Xie, R Yang, Y Wu, Y Xu, L Wang, X Zhen
Nan fang yi ke da xue xue bao = Journal of Southern Medical University, published 2024-08-20
DOI: 10.12122/j.issn.1673-4254.2024.08.15
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378041/pdf/
[An MRI multi-sequence feature imputation and fusion mutual-aid model based on sequence deletion for differentiation of high-grade from low-grade glioma].
Objective: To evaluate the performance of a magnetic resonance imaging (MRI) multi-sequence feature imputation and fusion mutual-aid model based on sequence deletion in differentiating high-grade glioma (HGG) from low-grade glioma (LGG).
Methods: We retrospectively collected multi-sequence MR images from 305 glioma patients, including 189 HGG patients and 116 LGG patients. The regions of interest (ROIs) on T1-weighted images (T1WI), T2-weighted images (T2WI), T2 fluid-attenuated inversion recovery (T2_FLAIR) images, and post-contrast-enhanced T1WI (CE_T1WI) were delineated to extract radiomics features. A mutual-aid model of MRI multi-sequence feature imputation and fusion based on sequence deletion was used to impute and fuse the feature matrix with missing data. The discriminative ability of the model was evaluated with 5-fold cross-validation by assessing accuracy, balanced accuracy, area under the ROC curve (AUC), specificity, and sensitivity. The proposed model was quantitatively compared with other non-holonomic multimodal classification models for discriminating HGG from LGG. Class separability experiments were performed on the latent features learned by the proposed feature imputation and fusion methods to observe the separation of the samples in a two-dimensional plane. Convergence experiments were used to verify the feasibility of the model.
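The five metrics used in the cross-validation can all be derived from a binary confusion matrix. A minimal sketch in plain Python, assuming HGG is treated as the positive class (the abstract does not state the label convention):

```python
def evaluate(y_true, y_pred):
    """Compute the five metrics reported in the abstract.

    y_true, y_pred: lists of 0/1 labels (1 = HGG, 0 = LGG, by assumption).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # true-positive rate (HGG correctly found)
    specificity = tn / (tn + fp)  # true-negative rate (LGG correctly found)
    accuracy = (tp + tn) / len(y_true)
    balanced_accuracy = (sensitivity + specificity) / 2
    return {"accuracy": accuracy, "balanced_accuracy": balanced_accuracy,
            "sensitivity": sensitivity, "specificity": specificity}
```

Balanced accuracy averages sensitivity and specificity, which is relevant here because the cohort is imbalanced (189 HGG vs. 116 LGG patients).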
Results: For differentiation of HGG from LGG at a missing rate of 10%, the proposed model achieved accuracy, balanced accuracy, AUC, specificity, and sensitivity of 0.777, 0.768, 0.826, 0.754, and 0.780, respectively. The fused latent features showed excellent class separability, and the algorithm iterated to convergence with classification performance superior to the other methods at missing rates of 30% and 50%.
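The missing rates above refer to whole MRI sequences deleted from the feature matrix. A hypothetical sketch of generating such a sequence-deletion mask for simulation experiments (the function name and sampling scheme are illustrative, not taken from the paper):

```python
import random

def delete_sequences(n_patients, sequences, missing_rate, seed=0):
    """Return a presence mask over (patient, sequence) cells, with the given
    fraction of cells marked absent to simulate missing MRI sequences."""
    rng = random.Random(seed)
    cells = [(i, s) for i in range(n_patients) for s in sequences]
    n_missing = int(round(missing_rate * len(cells)))
    missing = set(rng.sample(cells, n_missing))  # cells to delete
    return {c: c not in missing for c in cells}  # True = sequence present
```

A model evaluated under this protocol would receive only the feature blocks whose mask entry is True, matching the 10%, 30%, and 50% settings reported above.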
Conclusion: The proposed model performs excellently in the HGG versus LGG classification task and outperforms other non-holonomic multimodal classification models, demonstrating its potential for efficient processing of non-holonomic (incomplete) multimodal data.