Auto-segmentation of cerebral cavernous malformations using a convolutional neural network
Chi-Jen Chou, Huai-Che Yang, Cheng-Chia Lee, Zhi-Huan Jiang, Ching-Jen Chen, Hsiu-Mei Wu, Chun-Fu Lin, I-Chun Lai, Syu-Jyun Peng
BMC Medical Imaging, 25(1):190, published 2025-05-26. DOI: 10.1186/s12880-025-01738-6
Journal Article · Q2 (Radiology, Nuclear Medicine & Medical Imaging) · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12107882/pdf/
Citations: 0
Abstract
Background: This paper presents a deep learning model for the automated segmentation of cerebral cavernous malformations (CCMs).
Methods: The model was trained using treatment planning data from 199 Gamma Knife (GK) exams, comprising 171 cases with a single CCM and 28 cases with multiple CCMs. The training data included initial MRI images with target CCM regions manually annotated by neurosurgeons. For the extraction of data related to the brain parenchyma, we employed a mask region-based convolutional neural network (Mask R-CNN). Subsequently, this data was processed using a 3D convolutional neural network known as DeepMedic.
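The two-stage design described above (brain-parenchyma extraction followed by patch-based 3D CNN segmentation) can be illustrated with a minimal NumPy sketch of the data flow. This is not the authors' implementation: the helper names, patch size, and the use of a precomputed binary mask in place of the Mask R-CNN stage are all illustrative assumptions.

```python
import numpy as np

def apply_brain_mask(volume, brain_mask):
    """Zero out non-parenchyma voxels using a binary brain mask
    (stand-in for the Mask R-CNN extraction stage)."""
    return np.where(brain_mask, volume, 0.0)

def extract_patches(volume, centers, size=25):
    """Extract cubic patches around candidate voxels, the kind of
    input a DeepMedic-style patch-based 3D CNN consumes.
    The patch size here is arbitrary, not taken from the paper."""
    half = size // 2
    padded = np.pad(volume, half, mode="constant")
    patches = []
    for z, y, x in centers:
        # After padding by `half`, original voxel (z, y, x) sits at
        # (z + half, ...), so this slice is centered on it.
        patches.append(padded[z:z + size, y:y + size, x:x + size])
    return np.stack(patches)

# Toy example: a 64^3 "T2W" volume, a spherical brain mask, two patch centers.
rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))
zz, yy, xx = np.ogrid[:64, :64, :64]
mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 28 ** 2
brain = apply_brain_mask(vol, mask)
patches = extract_patches(brain, [(32, 32, 32), (20, 40, 30)])
print(patches.shape)  # (2, 25, 25, 25)
```

In the paper the brain mask comes from a trained Mask R-CNN rather than an analytic sphere; the point of the sketch is only the hand-off from masked volume to fixed-size 3D patches.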
Results: The efficacy of the brain parenchyma extraction model was demonstrated via five-fold cross-validation, yielding an average Dice similarity coefficient of 0.956 ± 0.002. The CCM segmentation models achieved an average Dice similarity coefficient of 0.741 ± 0.028 using only T2-weighted (T2W) images. The Dice similarity coefficients for the segmentation of CCM types were as follows: Zabramski classification type I (0.743), type II (0.742), and type III (0.740). We also developed a user-friendly graphical user interface to facilitate the use of these models in clinical analysis.
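The Dice similarity coefficient used throughout the Results is a standard overlap measure between a predicted and a reference binary mask: twice the intersection divided by the sum of the two mask volumes. A minimal NumPy implementation (the function name and toy masks are illustrative):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * inter / denom if denom else 1.0

# Toy 3D masks: an 8-voxel cube vs. a shifted copy overlapping in 4 voxels.
a = np.zeros((4, 4, 4), dtype=bool); a[0:2, 0:2, 0:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[0:2, 0:2, 1:3] = True
print(round(dice(a, b), 3))  # 2*4 / (8 + 8) = 0.5
```

A Dice score of 1.0 means perfect voxel-wise overlap, so the reported 0.956 for brain extraction indicates near-complete agreement with the manual reference, while 0.741 for CCM segmentation reflects the harder task of delineating small lesions.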
Conclusions: This paper presents a deep learning model for the automated segmentation of CCMs, demonstrating consistent performance across Zabramski classification types.
Journal overview:
BMC Medical Imaging is an open access journal publishing original peer-reviewed research articles in the development, evaluation, and use of imaging techniques and image processing tools to diagnose and manage disease.