Auto-segmentation of cerebral cavernous malformations using a convolutional neural network.

IF 2.9 | CAS Tier 3 (Medicine) | JCR Q2 | RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Chi-Jen Chou, Huai-Che Yang, Cheng-Chia Lee, Zhi-Huan Jiang, Ching-Jen Chen, Hsiu-Mei Wu, Chun-Fu Lin, I-Chun Lai, Syu-Jyun Peng
Citations: 0

Abstract

Auto-segmentation of cerebral cavernous malformations using a convolutional neural network.

Background: This paper presents a deep learning model for the automated segmentation of cerebral cavernous malformations (CCMs).

Methods: The model was trained on treatment planning data from 199 Gamma Knife (GK) exams, comprising 171 cases with a single CCM and 28 cases with multiple CCMs. The training data consisted of initial MRI images in which neurosurgeons had manually annotated the target CCM regions. A mask region-based convolutional neural network (Mask R-CNN) was used to extract the brain parenchyma, and the extracted data were then segmented using a 3D convolutional neural network, DeepMedic.
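The paper does not publish source code, but the step that links the two networks can be sketched as follows: the Mask R-CNN stage is assumed to yield a binary brain mask, which is applied to the MRI volume before it is passed to the patch-based 3D segmentation CNN. The function name `extract_parenchyma` and the toy volume below are illustrative, not from the paper:

```python
import numpy as np

def extract_parenchyma(volume, brain_mask):
    """Keep only voxels inside the binary brain mask; everything
    outside the parenchyma is zeroed before the volume is fed to
    the downstream 3D segmentation CNN."""
    if volume.shape != brain_mask.shape:
        raise ValueError("volume and mask shapes must match")
    return np.where(brain_mask.astype(bool), volume, 0)

# Toy T2W volume (constant intensity) and a cubic "brain" mask
vol = np.full((8, 8, 8), 100.0, dtype=np.float32)
mask = np.zeros((8, 8, 8), dtype=np.uint8)
mask[2:6, 2:6, 2:6] = 1           # 4*4*4 = 64 brain voxels
masked = extract_parenchyma(vol, mask)
print(int(masked.sum()))           # 64 voxels * 100.0 = 6400
```

In the actual pipeline the mask would come from the Mask R-CNN predictions rather than being hand-built, and the masked volume would be sampled into patches for DeepMedic.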

Results: The efficacy of the brain parenchyma extraction model was demonstrated via five-fold cross-validation, resulting in an average Dice similarity coefficient of 0.956 ± 0.002. The segmentation models used for CCMs achieved average Dice similarity coefficients of 0.741 ± 0.028 based solely on T2W images. The Dice similarity coefficients for the segmentation of CCMs types were as follows: Zabramski Classification type I (0.743), type II (0.742), and type III (0.740). We also developed a user-friendly graphical user interface to facilitate the use of these models in clinical analysis.
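The Dice similarity coefficient reported throughout the Results is a standard overlap metric between a predicted and a reference binary mask. A minimal sketch (the function name and toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D masks: 4-voxel prediction vs. 8-voxel reference
a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True   # |A| = 4
b = np.zeros((4, 4), dtype=bool); b[:2, :4] = True   # |B| = 8
# intersection = 4  ->  Dice = 2*4 / (4+8) = 0.667
print(round(dice_coefficient(a, b), 3))
```

Reporting an average of per-fold Dice scores with a standard deviation, as the paper does (e.g. 0.741 ± 0.028), then amounts to computing this metric per case within each cross-validation fold and aggregating.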

Conclusions: This paper presents a deep learning model for the automated segmentation of CCMs, demonstrating consistent performance across Zabramski classification types.

Trial registration: not applicable.

Source journal: BMC Medical Imaging (Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 4.60
Self-citation rate: 3.70%
Articles per year: 198
Review time: 27 weeks
Journal description: BMC Medical Imaging is an open access journal publishing original peer-reviewed research articles in the development, evaluation, and use of imaging techniques and image processing tools to diagnose and manage disease.