Near-pair patch generative adversarial network for data augmentation of focal pathology object detection models.

Journal of Medical Imaging (IF 1.9, Q3: Radiology, Nuclear Medicine & Medical Imaging)
Pub Date: 2024-05-01 · Epub Date: 2024-06-04 · DOI: 10.1117/1.JMI.11.3.034505
Ethan Tu, Jonathan Burkow, Andy Tsai, Joseph Junewick, Francisco A Perez, Jeffrey Otjen, Adam M Alessio
Volume 11(3), p. 034505. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11149891/pdf/
Citations: 0

Abstract

Purpose: The limited volume of medical training data remains one of the leading challenges for machine learning for diagnostic applications. Object detectors that identify and localize pathologies require training with a large volume of labeled images, which are often expensive and time-consuming to curate. To reduce this challenge, we present a method to support distant supervision of object detectors through generation of synthetic pathology-present labeled images.

Approach: Our method employs the previously proposed cyclic generative adversarial network (cycleGAN) with two key innovations: (1) use of "near-pair" pathology-present regions and pathology-absent regions from similar locations in the same subject for training and (2) the addition of a realism metric (Fréchet inception distance) to the generator loss term. We trained and tested this method with 2800 fracture-present and 2800 fracture-absent image patches from 704 unique pediatric chest radiographs. The trained model was then used to generate synthetic pathology-present images with exact knowledge of location (labels) of the pathology. These synthetic images provided an augmented training set for an object detector.
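The realism metric added to the generator loss is the Fréchet inception distance (FID), which reduces to the Fréchet distance between two Gaussians fitted to image-feature statistics. A minimal NumPy sketch of that distance is below; the function name and the direct use of raw feature statistics are illustrative — in practice, as in the paper, the means and covariances come from Inception-network embeddings of real versus generated patches.

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2*(sigma1 @ sigma2)^(1/2)).

    For SPD covariances, sigma1 @ sigma2 has real, non-negative eigenvalues,
    so Tr of its matrix square root is the sum of the square roots of those
    eigenvalues -- avoiding an explicit sqrtm.
    """
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_sqrt)
```

A lower value means the generated-patch feature distribution is closer to the real one; adding this term to the cycleGAN generator loss penalizes synthetic patches whose population statistics drift from real fracture patches.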

Results: In an observer study, four pediatric radiologists used a five-point Likert scale indicating the likelihood of a real fracture (1 = definitely not a fracture and 5 = definitely a fracture) to grade a set of real fracture-absent, real fracture-present, and synthetic fracture-present images. The real fracture-absent images scored 1.7±1.0, real fracture-present images 4.1±1.2, and synthetic fracture-present images 2.5±1.2. An object detector model (YOLOv5) trained on a mix of 500 real and 500 synthetic radiographs performed with a recall of 0.57±0.05 and an F2 score of 0.59±0.05. In comparison, when trained on only 500 real radiographs, the recall and F2 score were 0.49±0.06 and 0.53±0.06, respectively.
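The F2 score reported above is the F-beta score with beta = 2, which weights recall more heavily than precision — a sensible choice when a missed fracture is costlier than a false alarm. A short sketch of the metric (the precision values in the test are purely illustrative; the paper reports only recall and F2):

```python
def fbeta_score(precision, recall, beta=2.0):
    """F-beta score: (1 + beta^2) * P * R / (beta^2 * P + R).

    beta = 2 (the F2 score) values recall roughly twice as much as
    precision; beta = 1 recovers the ordinary F1 score.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)
```

Because of the recall weighting, a detector with high recall and modest precision scores better under F2 than the reverse, which matches the screening-oriented framing of the rib-fracture task.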

Conclusions: Our proposed method generates visually realistic pathology and provides improved object detector performance for the task of rib fracture detection.

Source journal: Journal of Medical Imaging (Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 4.10
Self-citation rate: 4.20%
Journal description: JMI covers fundamental and translational research, as well as applications, focused on medical imaging, which continue to yield physical and biomedical advancements in the early detection, diagnostics, and therapy of disease as well as in the understanding of normal. The scope of JMI includes: imaging physics; tomographic reconstruction algorithms (such as those in CT and MRI); image processing and deep learning; computer-aided diagnosis and quantitative image analysis; visualization and modeling; picture archiving and communications systems (PACS); image perception and observer performance; technology assessment; ultrasonic imaging; image-guided procedures; digital pathology; and biomedical applications of biomedical imaging. JMI allows for the peer-reviewed communication and archiving of scientific developments, translational and clinical applications, reviews, and recommendations for the field.