Evaluating SegResNet for single-modality meningioma segmentation on T1 contrast-enhanced MRI on a New Zealand clinical cohort

Neuroscience Informatics · Pub Date: 2026-03-01 · Epub Date: 2026-01-13 · DOI: 10.1016/j.neuri.2026.100261
Jiantao Shen, Sung-Min Jun, Samantha J. Holdsworth, Gonzalo Maso Talou, Jason A. Correia, Hamid Abbasi
Volume 6, Issue 1, Article 100261. Available at: https://www.sciencedirect.com/science/article/pii/S2772528626000051

Abstract

Accurate, automated meningioma segmentation remains a biomedical engineering challenge, particularly when only single-modality MRI data are available. We evaluate SegResNet, a U-Net-based deep learning architecture, for meningioma segmentation using 817 T1 contrast-enhanced (T1CE) magnetic resonance images from 282 patients across Auckland, New Zealand. We investigate how incorporating additional images from the 2023 Brain Tumor Segmentation (BraTS) meningioma challenge during training affects model performance. The baseline model, trained solely on the Auckland dataset, achieved a mean Dice score of 75.67%. Adding 200 and 400 BraTS images improved performance to 77.89% and 76.73%, respectively. A separate experiment, pre-training on BraTS data and then fine-tuning on Auckland data, achieved 75.90% Dice. These results suggest that while leveraging external datasets can enhance model robustness, the extent of improvement depends on dataset heterogeneity and alignment with the target domain.
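The mean Dice scores above are the standard voxel-wise overlap metric, 2|A∩B| / (|A| + |B|) for predicted mask A and ground-truth mask B. A minimal NumPy sketch (illustrative only; the paper's exact evaluation pipeline is not specified here):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Voxel-wise Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Toy 2D example: 4-voxel prediction vs. 6-voxel ground truth, overlap 4
pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), bool); gt[1:3, 1:4] = True
print(dice_score(pred, gt))  # 2*4 / (4+6) = 0.8
```

A reported "75.67% mean Dice" corresponds to averaging this score over all evaluated volumes.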
Analysis of the subset of images unaffected by skull-stripping artifacts showed notably higher segmentation accuracy (up to 84.02% Dice), highlighting the influence of preprocessing on performance. Evaluations using the 2023 and 2024 BraTS lesion-wise metrics demonstrated the importance of context-appropriate metric selection. Our findings highlight the adaptability of SegResNet to a single-modality clinical T1CE dataset (a sequence widely available in standard clinical protocols) and emphasize how public data integration, careful preprocessing, and task-aligned evaluation can support robust segmentation models in diverse and resource-constrained environments.
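Unlike voxel-wise Dice, the BraTS lesion-wise metrics score each ground-truth lesion separately and penalize spurious predicted lesions, which matters for multifocal tumors. A simplified sketch of that idea using connected-component labeling (illustrative only; the official BraTS implementation additionally dilates ground-truth lesions before matching and differs in other details):

```python
import numpy as np
from scipy import ndimage

def lesionwise_dice(pred, gt):
    """Simplified lesion-wise Dice: average per-ground-truth-lesion Dice,
    with each unmatched predicted lesion (false positive) scored as 0."""
    gt_lab, n_gt = ndimage.label(gt)
    pr_lab, n_pr = ndimage.label(pred)
    scores, matched = [], set()
    for g in range(1, n_gt + 1):
        g_mask = gt_lab == g
        hit = np.unique(pr_lab[g_mask])          # predicted components touching this lesion
        hit = hit[hit > 0]
        matched.update(hit.tolist())
        p_mask = np.isin(pr_lab, hit)
        inter = np.logical_and(p_mask, g_mask).sum()
        denom = p_mask.sum() + g_mask.sum()
        scores.append(2.0 * inter / denom if denom else 0.0)
    n_fp = n_pr - len(matched)                   # predicted lesions touching no ground truth
    total = n_gt + n_fp
    return sum(scores) / total if total else 1.0
```

Under this scoring, a prediction that segments one lesion perfectly but also hallucinates a second lesion scores 0.5, while voxel-wise Dice would barely penalize a small spurious blob, illustrating why metric choice must match the clinical context.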

