A deep learning approach for contrast-agent-free breast lesion detection and classification using adversarial synthesis of contrast-enhanced mammograms

IF 4.2 · CAS Tier 3 (Computer Science) · JCR Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Manar N. Amin, Muhammad A. Rushdi, Rasha Kamal, Amr Farouk, Mohamed Gomaa, Noha M. Fouad, Ahmed M. Mahmoud
Image and Vision Computing, Volume 162 (2025), Article 105692. DOI: 10.1016/j.imavis.2025.105692. Published 2025-08-05 (journal article, not open access).
Citations: 0

Abstract

Contrast-enhanced digital mammography (CEDM) has emerged as a promising complementary imaging modality for breast cancer diagnosis, offering enhanced lesion visualization and improved diagnostic accuracy, particularly for patients with dense breast tissues. However, the reliance of CEDM on contrast agents poses challenges to patient safety and accessibility. To overcome these challenges, this paper introduces a deep learning methodology for improved breast lesion detection and classification. In particular, an image-to-image translation model based on cycle-consistent generative adversarial networks (CycleGAN) is utilized to generate synthetic CEDM (SynCEDM) images from full-field digital mammography in order to enhance visual contrast perception without the need for contrast agents. A new dataset of 3958 pairs of low-energy (LE) and CEDM images was collected from 2908 female subjects to train the CycleGAN model to generate SynCEDM images. We then trained different You-Only-Look-Once (YOLO) architectures on CEDM and SynCEDM images for breast lesion detection and classification. SynCEDM images were generated with a structural similarity index (SSIM) of 0.94 ± 0.02. A YOLO lesion detector trained on original CEDM images achieved 91.34% accuracy, 90.37% sensitivity, and 92.06% specificity. In comparison, a detector trained on the SynCEDM images exhibited a comparable accuracy of 91.20%, a marginally higher sensitivity of 91.44%, and a slightly lower specificity of 91.30%. This approach not only aims to mitigate contrast-agent risks but also to improve breast cancer detection and characterization using mammography.
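The abstract reports SynCEDM fidelity as an SSIM of 0.94 ± 0.02. As a rough illustration of how such a score is computed, below is a minimal single-window SSIM in NumPy. This is a simplified global variant (the standard metric slides an 11×11 Gaussian window over the image, as in `skimage.metrics.structural_similarity`); the function name and images here are illustrative, not from the paper's pipeline.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified single-window SSIM between two grayscale images.

    Computes one set of luminance/contrast/structure statistics over
    the whole image rather than averaging windowed scores, which is
    enough to illustrate the formula behind the reported 0.94 SSIM.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

For identical images the score is exactly 1.0, and it decreases as the synthetic image diverges from the real contrast-enhanced one.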
Source journal: Image and Vision Computing (Engineering: Electrical & Electronic)

CiteScore: 8.50
Self-citation rate: 8.50%
Annual publications: 143
Review time: 7.8 months
Journal description: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.