Artificial Computed Tomography Images with Progressively Growing Generative Adversarial Network

Fawad Asadi, Jamie A. O’Reilly
{"title":"Artificial Computed Tomography Images with Progressively Growing Generative Adversarial Network","authors":"Fawad Asadi, Jamie A. O’Reilly","doi":"10.1109/BMEiCON53485.2021.9745251","DOIUrl":null,"url":null,"abstract":"Applications of artificial intelligence in medical imaging include classification, segmentation, and treatment planning. Using current deep-learning techniques, developing these systems requires large amounts of labelled training data. Obtaining this data is challenging due to costs, required expertise, inconsistency of imaging procedures and formatting, and patient privacy concerns. Generative adversarial networks (GANs) may alleviate some of these issues by supplying realistic artificial medical images. In this study, we trained progressively growing (PG)GAN to synthesize full-sized computed tomography (CT) images and succeeded. Performance of the PGGAN was evaluated using Fréchet Inception Distance (FID), Inception Score (IS), and Precision (P) and Recall (R). These metrics were calculated for generated, training, and validation images. The influence of dataset size was explored by varying the number of samples used to calculate each metric; this affected FID, P, and R, but not IS, which has obvious implications for comparing studies. The FID between artificial CT images from PGGAN and real validation images was 42; interestingly, FID between real training and validation images was 24. This suggests that a further reduction of 18 could be achieved by improving the generative model. Overall, artificial CT images generated by PGGAN were almost indistinguishable from real images to the human eye, although computational metrics could identify differences between them. In future work, GANs may be deployed to augment data for training medical AI systems.","PeriodicalId":380002,"journal":{"name":"2021 13th Biomedical Engineering International Conference (BMEiCON)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 13th Biomedical Engineering International Conference (BMEiCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BMEiCON53485.2021.9745251","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Applications of artificial intelligence in medical imaging include classification, segmentation, and treatment planning. Using current deep-learning techniques, developing these systems requires large amounts of labelled training data. Obtaining this data is challenging due to costs, required expertise, inconsistency of imaging procedures and formatting, and patient privacy concerns. Generative adversarial networks (GANs) may alleviate some of these issues by supplying realistic artificial medical images. In this study, we successfully trained a progressively growing GAN (PGGAN) to synthesize full-sized computed tomography (CT) images. Performance of the PGGAN was evaluated using Fréchet Inception Distance (FID), Inception Score (IS), Precision (P), and Recall (R). These metrics were calculated for generated, training, and validation images. The influence of dataset size was explored by varying the number of samples used to calculate each metric; this affected FID, P, and R, but not IS, which has clear implications for comparing results across studies. The FID between artificial CT images from PGGAN and real validation images was 42; interestingly, the FID between real training and validation images was 24. This suggests that a further reduction of 18 could be achieved by improving the generative model. Overall, artificial CT images generated by PGGAN were almost indistinguishable from real images to the human eye, although computational metrics could identify differences between them. In future work, GANs may be deployed to augment data for training medical AI systems.
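The paper does not include an implementation, but the Fréchet Inception Distance it reports is conventionally computed from the means and covariances of Inception-v3 features extracted from the two image sets being compared. The sketch below is a minimal illustration of that calculation using NumPy and SciPy; the arrays `real_feats` and `fake_feats` are hypothetical placeholders standing in for Inception activations of real and PGGAN-generated CT images, and the loop at the end only illustrates the sample-size dependence noted in the abstract, not the paper's actual experiment.

```python
import numpy as np
from scipy.linalg import sqrtm


def frechet_inception_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Compute FID between two sets of feature vectors.

    Each array has shape (n_samples, n_features), e.g. 2048-dim
    Inception-v3 pool features. Smaller FID indicates that the two
    feature distributions are more similar.
    """
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)

    # Matrix square root of the covariance product; discard small
    # imaginary components introduced by numerical error.
    cov_sqrt = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(cov_sqrt):
        cov_sqrt = cov_sqrt.real

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * cov_sqrt))


if __name__ == "__main__":
    # Synthetic stand-in features (NOT real Inception activations),
    # used only to show that the reported value depends on how many
    # samples are used to estimate the statistics.
    rng = np.random.default_rng(0)
    real_feats = rng.normal(size=(2000, 64))
    fake_feats = rng.normal(loc=0.1, size=(2000, 64))

    for n in (100, 500, 2000):
        fid_n = frechet_inception_distance(real_feats[:n], fake_feats[:n])
        print(f"FID with {n} samples per set: {fid_n:.2f}")
```

In practice the feature arrays would come from a fixed pretrained Inception-v3 network applied to the real and generated CT slices, and the resulting FID would be compared across the generated/training/validation pairings described in the abstract.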