Multi-task glioma segmentation and IDH mutation and 1p19q codeletion classification via a deep learning model on multimodal MRI

Erin Beate Bjørkeli, Morteza Esmaeili
{"title":"Multi-task glioma segmentation and IDH mutation and 1p19q codeletion classification via a deep learning model on multimodal MRI","authors":"Erin Beate Bjørkeli ,&nbsp;Morteza Esmaeili","doi":"10.1016/j.metrad.2025.100152","DOIUrl":null,"url":null,"abstract":"<div><h3>Objectives</h3><div>To develop a deep learning model for simultaneous segmentation of glioma lesions and classification of IDH mutation and 1p/19q codeletion status using multimodal MRI.</div></div><div><h3>Methods</h3><div>We employed a CNN model with Encoder-Decoder architecture for segmentation, followed by fully connected layers for classification. The model was trained and validated using the BraTS 2020 dataset (132 examinations with known molecular status, split 80/20). Four MRI sequences iamges (T1, T1ce, T2, FLAIR) were used for analysis. Segmentation performance was evaluated using mean Dice Score (mDS) and mean Intersection over Union (mIoU). Classification was assessed using accuracy, sensitivity, and specificity.</div></div><div><h3>Results</h3><div>The model achieved the best segmentation performance with all four modalities (mDS validation ​= ​0.73, mIoU validation ​= ​0.62). Among single modalities, FLAIR performed best (mDS validation ​= ​0.56, mIoU validation ​= ​0.44). For classification, the combined four modalities achieved an overall accuracy of 0.98. However, classification precision for IDH and 1p19q was potentially limited by class imbalance.</div></div><div><h3>Conclusion</h3><div>Our CNN-based Encoder-Decoder model demonstrates the benefit of multimodal MRI for accurate glioma segmentation and shows promising results for molecular subtype classification. Future work will focus on addressing class imbalance and exploring feature integration to enhance classification performance.</div></div>","PeriodicalId":100921,"journal":{"name":"Meta-Radiology","volume":"3 2","pages":"Article 100152"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Meta-Radiology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2950162825000207","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Objectives

To develop a deep learning model for simultaneous segmentation of glioma lesions and classification of IDH mutation and 1p/19q codeletion status using multimodal MRI.

Methods

We employed a CNN model with an encoder-decoder architecture for segmentation, followed by fully connected layers for classification. The model was trained and validated on the BraTS 2020 dataset (132 examinations with known molecular status, split 80/20). Images from four MRI sequences (T1, T1ce, T2, FLAIR) were used for analysis. Segmentation performance was evaluated using mean Dice Score (mDS) and mean Intersection over Union (mIoU). Classification was assessed using accuracy, sensitivity, and specificity.
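The abstract does not detail the network configuration, so the following is a minimal PyTorch sketch of the general design described: a shared encoder-decoder takes the four MRI modalities as stacked input channels, a convolutional head on the decoder output produces the segmentation, and fully connected layers on the bottleneck features predict molecular status. The class `MultiTaskGliomaNet`, the 2D layout, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Assumed multi-task sketch (not the authors' exact model): a shared
# encoder-decoder segments the tumour, and fully connected layers on the
# bottleneck features classify molecular status (e.g. IDH or 1p/19q).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MultiTaskGliomaNet(nn.Module):  # hypothetical name
    def __init__(self, in_channels=4, num_seg_classes=2, num_mol_classes=2):
        super().__init__()
        # Encoder: 4 input channels = T1, T1ce, T2, FLAIR stacked.
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        # Decoder with skip connections and a segmentation head.
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.seg_head = nn.Conv2d(32, num_seg_classes, 1)
        # Classification head: global pooling over bottleneck features,
        # then fully connected layers for molecular status.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, num_mol_classes),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_head(d1), self.cls_head(b)

# Example: one 4-modality slice of size 128x128.
model = MultiTaskGliomaNet()
seg_logits, mol_logits = model(torch.randn(1, 4, 128, 128))
```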

Results

The model achieved the best segmentation performance with all four modalities combined (validation mDS = 0.73, validation mIoU = 0.62). Among single modalities, FLAIR performed best (validation mDS = 0.56, validation mIoU = 0.44). For classification, the combined four modalities achieved an overall accuracy of 0.98. However, classification precision for IDH and 1p/19q status was potentially limited by class imbalance.
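For reference, the Dice score and IoU behind the mDS and mIoU figures above follow their standard definitions; a minimal sketch (not the authors' evaluation code) is shown below, with the mean over validation cases giving mDS and mIoU.

```python
# Standard per-case Dice and IoU on binary masks (not the authors' code);
# averaging over validation cases yields mDS and mIoU as reported above.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Example: mean over a list of (prediction, ground-truth) mask pairs.
# mDS  = np.mean([dice_score(p, t) for p, t in pairs])
# mIoU = np.mean([iou_score(p, t) for p, t in pairs])
```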

Conclusion

Our CNN-based encoder-decoder model demonstrates the benefit of multimodal MRI for accurate glioma segmentation and shows promising results for molecular subtype classification. Future work will focus on addressing class imbalance and exploring feature integration to enhance classification performance.