Abstract 5016: Enhancing prostate pelvic multimodality data generating with conditional generative models: A Pix2Pix-based approach for MRI-to-PET synthesis
Ruobing Liu, Shuo Wang, Shibiao Wan, Jieqiong Wang
{"title":"Abstract 5016: Enhancing prostate pelvic multimodality data generating with conditional generative models: A Pix2Pix-based approach for MRI-to-PET synthesis","authors":"Ruobing Liu, Shuo Wang, Shibiao Wan, Jieqiong Wang","doi":"10.1158/1538-7445.am2025-5016","DOIUrl":null,"url":null,"abstract":"Prostate cancer remains the second leading cause of cancer-related mortality among men globally, underscoring the critical need for early detection of treatment failure and effective assessment of metastatic risk. Positron Emission Tomography (PET), particularly with prostate-specific membrane antigen (PSMA) tracers, has demonstrated superior sensitivity for identifying prostate lesions, including metastases. However, the accessibility of PET imaging is often limited by its high costs and associated radiation exposure. To overcome these challenges, we developed a deep learning model to synthesize PET images from Magnetic Resonance Imaging (MRI) image, facilitating treatment response evaluation.High-resolution T2-weighted MRI and PSMA PET images acquired within a close timeframe were retrieved from 10 prostate cancer patients, who underwent definitive radiotherapy. The PSMA-PET scans were registered to the MRI images within the Eclipse (Varian Medical Systems), and the PET images were cropped to match the same size and resolution of MRI images, resulting in 321 pairs of MRI-PET 2D images. Preprocessing included grayscale transformation, z-score normalization, and pixel value inversion to enhance model learning. A Pix2Pix framework was implemented, employing a U-Net generator and a PatchGAN discriminator. The loss function used consisted of a combination of adversarial loss, to ensure the realism of the generated images, and L1 loss, to maintain pixel-wise consistency between the generated and target images. Model evaluation was performed using leave-one-out-cross-validation (LOOCV), where all slices from one patient was used for testing and the remaining 9 patients for training.The model achieved an average Peak Signal-to-Noise Ratio (PSNR) of 14.55 and a Structural Similarity Index Measure (SSIM) of 0.648. Although these quantitative metrics were moderate, qualitative evaluation demonstrated precise and clinically meaningful localization of lesions, offering utility for aiding physician visualization of high-risk regions.This study highlighted the feasibility of leveraging Pix2Pix-based conditional generative models for synthesizing PET-equivalent images from MRI data as a cost-effective alternative to enhance prostate cancer imaging. Future efforts will focus on expanding the dataset and investigating advanced architectures, including 3D-to-3D generative models and diffusion techniques, to further improve the accuracy of prostate lesion localization and clinical applicability. Citation Format: Ruobing Liu, Shuo Wang, Shibiao Wan, Jieqiong Wang. Enhancing prostate pelvic multimodality data generating with conditional generative models: A Pix2Pix-based approach for MRI-to-PET synthesis [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular s); 2025 Apr 25-30; Chicago, IL. 
Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1): nr 5016.","PeriodicalId":9441,"journal":{"name":"Cancer research","volume":"64 1","pages":""},"PeriodicalIF":12.5000,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cancer research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1158/1538-7445.am2025-5016","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ONCOLOGY","Score":null,"Total":0}
Abstract
Prostate cancer remains the second leading cause of cancer-related mortality among men globally, underscoring the critical need for early detection of treatment failure and effective assessment of metastatic risk. Positron Emission Tomography (PET), particularly with prostate-specific membrane antigen (PSMA) tracers, has demonstrated superior sensitivity for identifying prostate lesions, including metastases. However, the accessibility of PET imaging is often limited by its high cost and associated radiation exposure. To overcome these challenges, we developed a deep learning model that synthesizes PET images from Magnetic Resonance Imaging (MRI) images, facilitating treatment response evaluation.

High-resolution T2-weighted MRI and PSMA PET images acquired within a close timeframe were retrieved from 10 prostate cancer patients who underwent definitive radiotherapy. The PSMA PET scans were registered to the MRI images within the Eclipse treatment planning system (Varian Medical Systems), and the PET images were cropped to match the size and resolution of the MRI images, yielding 321 pairs of 2D MRI-PET images. Preprocessing included grayscale transformation, z-score normalization, and pixel value inversion to enhance model learning. A Pix2Pix framework was implemented, employing a U-Net generator and a PatchGAN discriminator. The loss function combined an adversarial loss, to ensure the realism of the generated images, with an L1 loss, to maintain pixel-wise consistency between the generated and target images. Model evaluation was performed using leave-one-out cross-validation (LOOCV), in which all slices from one patient were used for testing and the remaining 9 patients for training.

The model achieved an average Peak Signal-to-Noise Ratio (PSNR) of 14.55 and a Structural Similarity Index Measure (SSIM) of 0.648. Although these quantitative metrics were moderate, qualitative evaluation demonstrated precise and clinically meaningful localization of lesions, offering utility for aiding physician visualization of high-risk regions.

This study highlighted the feasibility of leveraging Pix2Pix-based conditional generative models to synthesize PET-equivalent images from MRI data as a cost-effective alternative for enhancing prostate cancer imaging. Future efforts will focus on expanding the dataset and investigating advanced architectures, including 3D-to-3D generative models and diffusion techniques, to further improve the accuracy of prostate lesion localization and clinical applicability. Illustrative code sketches of the pipeline steps described above follow.

Citation Format: Ruobing Liu, Shuo Wang, Shibiao Wan, Jieqiong Wang. Enhancing prostate pelvic multimodality data generating with conditional generative models: A Pix2Pix-based approach for MRI-to-PET synthesis [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular s); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1): nr 5016.
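The preprocessing the abstract describes (grayscale transformation, z-score normalization, pixel value inversion) could look like the following minimal NumPy sketch. It assumes each slice arrives as a 2D array; the authors' actual I/O pipeline and the exact ordering of steps are not specified in the abstract.

```python
import numpy as np

def preprocess_slice(img: np.ndarray, invert: bool = True) -> np.ndarray:
    """Per-slice z-score normalization with optional pixel-value inversion."""
    img = img.astype(np.float32)                    # grayscale intensities as floats
    img = (img - img.mean()) / (img.std() + 1e-8)   # z-score: zero mean, unit variance
    if invert:
        img = -img   # pixel-value inversion, reported by the authors to aid learning
    return img
```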
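The Pix2Pix objective described above (adversarial loss for realism plus L1 loss for pixel-wise consistency) can be sketched with PyTorch primitives as below. The L1 weight of 100 is an assumption taken from the original Pix2Pix paper (Isola et al., 2017); the abstract does not report the weighting used.

```python
import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()  # adversarial loss on raw PatchGAN logits
l1_criterion = nn.L1Loss()              # pixel-wise consistency term
LAMBDA_L1 = 100.0                       # assumed weight (Isola et al. 2017), not stated in the abstract

def generator_loss(disc_fake_logits: torch.Tensor,
                   fake_pet: torch.Tensor,
                   real_pet: torch.Tensor) -> torch.Tensor:
    # Fool the discriminator: label every generated patch as "real"...
    adv = adv_criterion(disc_fake_logits, torch.ones_like(disc_fake_logits))
    # ...while keeping the synthesized PET close to the target PET pixel-wise.
    pix = l1_criterion(fake_pet, real_pet)
    return adv + LAMBDA_L1 * pix

def discriminator_loss(disc_real_logits: torch.Tensor,
                       disc_fake_logits: torch.Tensor) -> torch.Tensor:
    real = adv_criterion(disc_real_logits, torch.ones_like(disc_real_logits))
    fake = adv_criterion(disc_fake_logits, torch.zeros_like(disc_fake_logits))
    return 0.5 * (real + fake)
```

Because the discriminator is a PatchGAN, its output is a grid of patch-level real/fake logits rather than a single scalar; the `ones_like`/`zeros_like` targets above inherit that patch shape automatically.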
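The patient-level LOOCV and the reported metrics (PSNR, SSIM) might be organized as in the sketch below, using scikit-image's metric functions. Here `train` and `synthesize` are hypothetical stand-ins for the authors' unpublished training and inference routines, and `slices_by_patient` is an assumed data layout.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def loocv_evaluate(slices_by_patient, train, synthesize):
    """slices_by_patient: dict mapping patient_id -> list of (mri, pet) 2D arrays."""
    psnr_scores, ssim_scores = [], []
    for held_out in slices_by_patient:
        # Train on all slices from the remaining patients (9 of 10 here).
        train_pairs = [pair for pid, pairs in slices_by_patient.items()
                       if pid != held_out for pair in pairs]
        model = train(train_pairs)
        # Test on every slice of the held-out patient.
        for mri, pet in slices_by_patient[held_out]:
            fake = synthesize(model, mri)
            rng = pet.max() - pet.min()  # data_range required for float images
            psnr_scores.append(peak_signal_noise_ratio(pet, fake, data_range=rng))
            ssim_scores.append(structural_similarity(pet, fake, data_range=rng))
    return float(np.mean(psnr_scores)), float(np.mean(ssim_scores))
```

Averaging per-slice scores across all folds in this way would yield summary numbers comparable to the reported PSNR of 14.55 and SSIM of 0.648.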
About the Journal
Cancer Research, published by the American Association for Cancer Research (AACR), focuses on impactful original studies, reviews, and opinion pieces relevant to the broad cancer research community. Manuscripts that present conceptual or technological advances leading to insights into cancer biology are particularly sought after. The journal also emphasizes convergence science, which bridges multiple distinct areas of cancer research.
With primary subsections including Cancer Biology, Cancer Immunology, Cancer Metabolism and Molecular Mechanisms, Translational Cancer Biology, Cancer Landscapes, and Convergence Science, Cancer Research has a comprehensive scope. It is published twice a month and has one volume per year, with a print ISSN of 0008-5472 and an online ISSN of 1538-7445.
Cancer Research is abstracted and/or indexed in various databases and platforms, including BIOSIS Previews (R) Database, MEDLINE, Current Contents/Life Sciences, Current Contents/Clinical Medicine, Science Citation Index, Scopus, and Web of Science.