Comprehensive segmentation of gray matter structures on T1-weighted brain MRI: a comparative study of CNN, CNN hybrid-transformer, or -Mamba architectures.

Yujia Wei, Jaidip Manikrao Jagtap, Yashbir Singh, Bardia Khosravi, Jason Cai, Jeffrey L Gunter, Bradley J Erickson
{"title":"Comprehensive segmentation of gray matter structures on T1-weighted brain MRI: A Comparative Study of CNN, CNN hybrid-transformer or -Mamba architectures.","authors":"Yujia Wei, Jaidip Manikrao Jagtap, Yashbir Singh, Bardia Khosravi, Jason Cai, Jeffrey L Gunter, Bradley J Erickson","doi":"10.3174/ajnr.A8544","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and purpose: </strong>Recent advances in deep learning have shown promising results in medical image analysis and segmentation. However, most brain MRI segmentation models are limited by the size of their datasets and/or the number of structures they can identify. This study evaluates the performance of six advanced deep learning models in segmenting 122 brain structures from T1-weighted MRI scans, aiming to identify the most effective model for clinical and research applications.</p><p><strong>Materials and methods: </strong>1,510 T1-weighted MRIs were used to compare six deep-learning models for the segmentation of 122 distinct gray matter structures: nnU-Net, SegResNet, SwinUNETR, UNETR, U-Mamba_BOT and U-Mamba_ Enc. Each model was rigorously tested for accuracy using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (HD95). Additionally, the volume of each structure was calculated and compared between normal control (NC) and Alzheimer's Disease (AD) patients.</p><p><strong>Results: </strong>U-Mamba_Bot achieved the highest performance with a median DSC of 0.9112 [IQR:0.8957, 0.9250]. nnU-Net achieved a median DSC of 0.9027 [IQR: 0.8847, 0.9205] and had the highest HD95 of 1.392[IQR: 1.174, 2.029]. The value of each HD95 (<3mm) indicates its superior capability in capturing detailed brain structures accurately. Following segmentation, volume calculations were performed, and the resultant volumes of normal controls and AD patients were compared. The volume changes observed in thirteen brain substructures were all consistent with those reported in existing literature, reinforcing the reliability of the segmentation outputs.</p><p><strong>Conclusions: </strong>This study underscores the efficacy of U-Mamba_Bot as a robust tool for detailed brain structure segmentation in T1-weighted MRI scans. The congruence of our volumetric analysis with the literature further validates the potential of advanced deep-learning models to enhance the understanding of neurodegenerative diseases such as AD. Future research should consider larger datasets to validate these findings further and explore the applicability of these models in other neurological conditions.</p><p><strong>Abbreviations: </strong>AD = Alzheimer's Disease; ADNI = Alzheimer's Disease Neuroimaging Initiative; DSC = Dice Similarity Coefficient; HD95 = the 95th Percentile Hausdorff Distance; IQR = Interquartile Range; NC = Normal Control; SSMs = State-space Sequence Models.</p>","PeriodicalId":93863,"journal":{"name":"AJNR. American journal of neuroradiology","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AJNR. American journal of neuroradiology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3174/ajnr.A8544","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background and purpose: Recent advances in deep learning have shown promising results in medical image analysis and segmentation. However, most brain MRI segmentation models are limited by the size of their datasets and/or the number of structures they can identify. This study evaluates the performance of six advanced deep learning models in segmenting 122 brain structures from T1-weighted MRI scans, aiming to identify the most effective model for clinical and research applications.

Materials and methods: A total of 1,510 T1-weighted MRIs were used to compare six deep-learning models for the segmentation of 122 distinct gray matter structures: nnU-Net, SegResNet, SwinUNETR, UNETR, U-Mamba_Bot, and U-Mamba_Enc. Each model was rigorously tested for accuracy using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (HD95). Additionally, the volume of each structure was calculated and compared between normal control (NC) and Alzheimer's Disease (AD) patients.
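
The abstract does not include implementation details; the following is a minimal, illustrative sketch (not the authors' code) of how DSC and HD95 can be computed for a pair of binary segmentation masks. It assumes NumPy/SciPy, non-empty masks, and voxel spacing given in millimeters; the function names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, gt):
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th percentile symmetric Hausdorff distance (in mm) between two binary masks.

    Assumes both masks contain at least one foreground voxel.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: foreground minus its morphological erosion
    pred_surface = pred ^ ndimage.binary_erosion(pred)
    gt_surface = gt ^ ndimage.binary_erosion(gt)
    # Distance (in mm) from every voxel to the nearest surface voxel of the other mask
    dist_to_gt = ndimage.distance_transform_edt(~gt_surface, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surface, sampling=spacing)
    surface_distances = np.concatenate([dist_to_gt[pred_surface],
                                        dist_to_pred[gt_surface]])
    return np.percentile(surface_distances, 95)
```

In a per-structure evaluation such as the one described above, these functions would be applied to the binary mask of each of the 122 labels in turn, and the per-structure values summarized as medians with interquartile ranges.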

Results: U-Mamba_Bot achieved the highest performance, with a median DSC of 0.9112 [IQR: 0.8957, 0.9250]. nnU-Net achieved a median DSC of 0.9027 [IQR: 0.8847, 0.9205] and had the highest HD95 of 1.392 mm [IQR: 1.174, 2.029]. Every model's HD95 remained below 3 mm, indicating strong capability in accurately capturing detailed brain structures. Following segmentation, volume calculations were performed, and the resultant volumes of normal controls and AD patients were compared. The volume changes observed in thirteen brain substructures were all consistent with those reported in the existing literature, reinforcing the reliability of the segmentation outputs.
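
The volumetric step can be illustrated with a short sketch (again, not the authors' pipeline): given a multi-label segmentation saved as a NIfTI file, each structure's volume is the count of its voxels multiplied by the physical volume of one voxel. The example below assumes nibabel is available and that label 0 is background; the function name is hypothetical.

```python
import numpy as np
import nibabel as nib

def structure_volumes_ml(seg_path):
    """Return the volume (in mL) of every labeled structure in a segmentation NIfTI."""
    img = nib.load(seg_path)
    seg = np.asarray(img.dataobj).astype(np.int32)
    # Physical volume of one voxel in mm^3 (product of the voxel dimensions)
    voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))
    labels, counts = np.unique(seg, return_counts=True)
    # Skip label 0 (background); 1 mL = 1000 mm^3
    return {int(lab): n * voxel_mm3 / 1000.0 for lab, n in zip(labels, counts) if lab != 0}
```

Group comparisons of the resulting per-structure volumes between NC and AD subjects could then be made with a standard statistical test (e.g., a nonparametric rank test), although the abstract does not specify which test the authors used.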

Conclusions: This study underscores the efficacy of U-Mamba_Bot as a robust tool for detailed brain structure segmentation in T1-weighted MRI scans. The congruence of our volumetric analysis with the literature further validates the potential of advanced deep-learning models to enhance the understanding of neurodegenerative diseases such as AD. Future research should consider larger datasets to validate these findings further and explore the applicability of these models in other neurological conditions.

Abbreviations: AD = Alzheimer's Disease; ADNI = Alzheimer's Disease Neuroimaging Initiative; DSC = Dice Similarity Coefficient; HD95 = 95th Percentile Hausdorff Distance; IQR = Interquartile Range; NC = Normal Control; SSMs = State-space Sequence Models.
