Evaluating synthetic neuroimaging data augmentation for automatic brain tumour segmentation with a deep fully-convolutional network

Fawad Asadi, Thanate Angsuwatanakul, Jamie A. O’Reilly

Journal: IBRO Neuroscience Reports (Q3, Neurosciences; impact factor 2.0)
DOI: 10.1016/j.ibneur.2023.12.002
Published: 2023-12-14 (journal article)
Citations: 0

Abstract

Gliomas observed in medical images require expert neuro-radiologist evaluation for treatment planning and monitoring, motivating development of intelligent systems capable of automating aspects of tumour evaluation. Deep learning models for automatic image segmentation rely on the amount and quality of training data. In this study we developed a neuroimaging synthesis technique to augment data for training fully-convolutional networks (U-nets) to perform automatic glioma segmentation. We used StyleGAN2-ada to simultaneously generate fluid-attenuated inversion recovery (FLAIR) magnetic resonance images and corresponding glioma segmentation masks. Synthetic data were successively added to real training data (n = 2751) in fourteen rounds of 1000 and used to train U-nets that were evaluated on held-out validation (n = 590) and test sets (n = 588). U-nets were trained with and without geometric augmentation (translation, zoom and shear), and Dice coefficients were computed to evaluate segmentation performance. We also monitored the number of training iterations before stopping, total training time, and time per iteration to evaluate computational costs associated with training each U-net. Synthetic data augmentation yielded marginal improvements in Dice coefficients (validation set +0.0409, test set +0.0355), whereas geometric augmentation improved generalization (standard deviation between training, validation and test set performances of 0.01 with, and 0.04 without geometric augmentation). Based on the modest performance gains for automatic glioma segmentation we find it hard to justify the computational expense of developing a synthetic image generation pipeline. Future work may seek to optimize the efficiency of synthetic data generation for augmentation of neuroimaging data.
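The segmentation metric reported above, the Dice coefficient, measures overlap between a predicted and a reference tumour mask. A minimal NumPy sketch (the function name and the epsilon smoothing term are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two toy masks sharing 2 of their 3 foreground pixels: Dice = 4/6
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(a, b)
```

A Dice of 1.0 means perfect overlap and 0.0 means none, so the gains quoted in the abstract (+0.04 on a 0-to-1 scale) are small in absolute terms.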
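The geometric augmentation described (translation, zoom and shear applied identically to the FLAIR image and its mask, with nearest-neighbour sampling so mask labels stay binary) can be sketched in plain NumPy. All function names, the affine-matrix construction, and the parameter ranges here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def affine_resample(arr, matrix, offset):
    """Nearest-neighbour resampling of a 2-D array under an affine map
    from output coordinates to input coordinates."""
    h, w = arr.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src = matrix @ np.stack([ys.ravel(), xs.ravel()]).astype(float)
    src += np.asarray(offset, dtype=float)[:, None]
    iy = np.clip(np.rint(src[0]).astype(int), 0, h - 1)
    ix = np.clip(np.rint(src[1]).astype(int), 0, w - 1)
    return arr[iy, ix].reshape(h, w)

def random_geometric_augment(image, mask, rng):
    """Draw one random translation/zoom/shear and apply the identical
    transform to image and mask so the pair stays aligned."""
    zoom = rng.uniform(0.9, 1.1)       # illustrative range
    shear = rng.uniform(-0.1, 0.1)     # illustrative range
    ty, tx = rng.uniform(-3, 3, size=2)  # translation in pixels
    matrix = np.array([[zoom, shear],
                       [0.0,  zoom]])
    return (affine_resample(image, matrix, (ty, tx)),
            affine_resample(mask, matrix, (ty, tx)))
```

Applying one transform to both members of the pair is the essential point: augmenting the image without identically warping its mask would corrupt the training labels.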

Source journal: IBRO Neuroscience Reports — Neuroscience (all)
CiteScore: 2.80
Self-citation rate: 0.00%
Annual publications: 99
Review time: 14 weeks