Manar N. Amin, Muhammad A. Rushdi, Rasha Kamal, Amr Farouk, Mohamed Gomaa, Noha M. Fouad, Ahmed M. Mahmoud
{"title":"一种深度学习方法,用于无造影剂的乳房病变检测和分类,使用对比增强乳房x光片的对抗合成","authors":"Manar N. Amin , Muhammad A. Rushdi , Rasha Kamal , Amr Farouk , Mohamed Gomaa , Noha M. Fouad , Ahmed M. Mahmoud","doi":"10.1016/j.imavis.2025.105692","DOIUrl":null,"url":null,"abstract":"<div><div>Contrast-enhanced digital mammography (CEDM) has emerged as a promising complementary imaging modality for breast cancer diagnosis, offering enhanced lesion visualization and improved diagnostic accuracy, particularly for patients with dense breast tissues. However, the reliance of CEDM on contrast agents poses challenges to patient safety and accessibility. To overcome those challenges, this paper introduces a deep learning methodology for improved breast lesion detection and classification. In particular, an image-to-image translation model based on cycle-consistent generative adversarial networks (CycleGAN) is utilized to generate synthetic CEDM (SynCEDM) images from full-field digital mammography in order to enhance visual contrast perception without the need for contrast agents. A new dataset of 3958 pairs of low-energy (LE) and CEDM images was collected from 2908 female subjects to train the CycleGAN model to generate SynCEDM images. Thus, we trained different You-Only-Look-Once (YOLO) architectures on CEDM and SynCEDM images for breast lesion detection and classification. SynCEDM images were generated with a structural similarity index (SSIM) of 0.94 ± 0.02. A YOLO lesion detector trained on original CEDM images led to a 91.34% accuracy, a 90.37% sensitivity, and a 92.06% specificity. In comparison, a detector trained on the SynCEDM images exhibited a comparable accuracy of 91.20%, a marginally higher sensitivity of 91.44%, and a slightly lower specificity of 91.30%. This approach not only aims to mitigate contrast agent risks but also to improve breast cancer detection and characterization using mammography.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"162 ","pages":"Article 105692"},"PeriodicalIF":4.2000,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A deep learning approach for contrast-agent-free breast lesion detection and classification using adversarial synthesis of contrast-enhanced mammograms\",\"authors\":\"Manar N. Amin , Muhammad A. Rushdi , Rasha Kamal , Amr Farouk , Mohamed Gomaa , Noha M. Fouad , Ahmed M. Mahmoud\",\"doi\":\"10.1016/j.imavis.2025.105692\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Contrast-enhanced digital mammography (CEDM) has emerged as a promising complementary imaging modality for breast cancer diagnosis, offering enhanced lesion visualization and improved diagnostic accuracy, particularly for patients with dense breast tissues. However, the reliance of CEDM on contrast agents poses challenges to patient safety and accessibility. To overcome those challenges, this paper introduces a deep learning methodology for improved breast lesion detection and classification. In particular, an image-to-image translation model based on cycle-consistent generative adversarial networks (CycleGAN) is utilized to generate synthetic CEDM (SynCEDM) images from full-field digital mammography in order to enhance visual contrast perception without the need for contrast agents. 
A new dataset of 3958 pairs of low-energy (LE) and CEDM images was collected from 2908 female subjects to train the CycleGAN model to generate SynCEDM images. Thus, we trained different You-Only-Look-Once (YOLO) architectures on CEDM and SynCEDM images for breast lesion detection and classification. SynCEDM images were generated with a structural similarity index (SSIM) of 0.94 ± 0.02. A YOLO lesion detector trained on original CEDM images led to a 91.34% accuracy, a 90.37% sensitivity, and a 92.06% specificity. In comparison, a detector trained on the SynCEDM images exhibited a comparable accuracy of 91.20%, a marginally higher sensitivity of 91.44%, and a slightly lower specificity of 91.30%. This approach not only aims to mitigate contrast agent risks but also to improve breast cancer detection and characterization using mammography.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"162 \",\"pages\":\"Article 105692\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S026288562500280X\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S026288562500280X","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
A deep learning approach for contrast-agent-free breast lesion detection and classification using adversarial synthesis of contrast-enhanced mammograms
Contrast-enhanced digital mammography (CEDM) has emerged as a promising complementary imaging modality for breast cancer diagnosis, offering enhanced lesion visualization and improved diagnostic accuracy, particularly for patients with dense breast tissue. However, the reliance of CEDM on contrast agents poses challenges to patient safety and accessibility. To overcome these challenges, this paper introduces a deep learning methodology for improved breast lesion detection and classification. In particular, an image-to-image translation model based on cycle-consistent generative adversarial networks (CycleGAN) is used to generate synthetic CEDM (SynCEDM) images from full-field digital mammography, enhancing visual contrast perception without the need for contrast agents. A new dataset of 3958 pairs of low-energy (LE) and CEDM images was collected from 2908 female subjects to train the CycleGAN model to generate SynCEDM images. We then trained different You-Only-Look-Once (YOLO) architectures on CEDM and SynCEDM images for breast lesion detection and classification. SynCEDM images were generated with a structural similarity index (SSIM) of 0.94 ± 0.02. A YOLO lesion detector trained on original CEDM images achieved 91.34% accuracy, 90.37% sensitivity, and 92.06% specificity. In comparison, a detector trained on the SynCEDM images exhibited a comparable accuracy of 91.20%, a marginally higher sensitivity of 91.44%, and a slightly lower specificity of 91.30%. This approach aims not only to mitigate contrast-agent risks but also to improve breast cancer detection and characterization using mammography.
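To make the pipeline concrete, the sketch below illustrates the two stages the abstract describes: a CycleGAN-style generator objective for LE-to-CEDM translation, and SSIM evaluation of the synthesized images followed by YOLO detector training. This is a minimal illustration, not the authors' implementation; the module names (G, F, D_y), the loss weighting, and the dataset config "lesions.yaml" are all assumptions, and the paper does not specify which YOLO variants or pretrained weights were used.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.metrics import structural_similarity as ssim
from ultralytics import YOLO

l1 = nn.L1Loss()

def cyclegan_generator_loss(G, F, D_y, x_le, y_cedm, lambda_cyc=10.0):
    """Generator-side CycleGAN objective for LE -> CEDM translation.

    G: maps low-energy (LE) mammograms to synthetic CEDM images.
    F: maps CEDM images back to the LE domain.
    D_y: discriminator for the CEDM domain.
    All names here are illustrative, not taken from the paper.
    """
    fake_cedm = G(x_le)  # SynCEDM candidate
    # Least-squares adversarial term (LSGAN), commonly paired with CycleGAN
    loss_adv = torch.mean((D_y(fake_cedm) - 1.0) ** 2)
    # Cycle consistency: x -> G(x) -> F(G(x)) should recover x, and vice versa
    loss_cyc = l1(F(fake_cedm), x_le) + l1(G(F(y_cedm)), y_cedm)
    return loss_adv + lambda_cyc * loss_cyc

# Placeholder arrays standing in for a real CEDM image and its synthetic
# counterpart; in practice these would be co-registered grayscale mammograms.
real_cedm = np.random.rand(512, 512)
syn_cedm = np.random.rand(512, 512)
print(f"SSIM: {ssim(real_cedm, syn_cedm, data_range=1.0):.3f}")

# Fine-tune an off-the-shelf YOLO detector on lesion bounding boxes.
# "lesions.yaml" is a hypothetical dataset config, not a released artifact.
model = YOLO("yolov8n.pt")
model.train(data="lesions.yaml", epochs=100, imgsz=640)
```

Training the same detector once on real CEDM crops and once on SynCEDM crops, as the abstract reports, would then make the two sets of accuracy/sensitivity/specificity figures directly comparable.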
Journal overview:
The primary aim of Image and Vision Computing is to provide an effective medium for exchanging the results of high-quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real-world scenes. It seeks to deepen understanding in the discipline by encouraging the quantitative comparison and performance evaluation of proposed methods. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, and image databases.