5016: Enhancing prostate pelvic multimodality data generating with conditional generative models: A Pix2Pix-based approach for MRI-to-PET synthesis

IF 12.5 · Medicine, Region 1 · Q1 ONCOLOGY
Ruobing Liu, Shuo Wang, Shibiao Wan, Jieqiong Wang
DOI: 10.1158/1538-7445.am2025-5016
Journal: Cancer Research, 2025;85(8_Suppl_1): nr 5016
Published: 2025-04-22 · Journal Article
Citations: 0

Abstract

Abstract 5016: Enhancing prostate pelvic multimodality data generating with conditional generative models: A Pix2Pix-based approach for MRI-to-PET synthesis
Prostate cancer remains the second leading cause of cancer-related mortality among men globally, underscoring the critical need for early detection of treatment failure and effective assessment of metastatic risk. Positron Emission Tomography (PET), particularly with prostate-specific membrane antigen (PSMA) tracers, has demonstrated superior sensitivity for identifying prostate lesions, including metastases. However, the accessibility of PET imaging is often limited by its high cost and associated radiation exposure. To overcome these challenges, we developed a deep learning model to synthesize PET images from Magnetic Resonance Imaging (MRI) images, facilitating treatment response evaluation.

High-resolution T2-weighted MRI and PSMA PET images acquired within a close timeframe were retrieved from 10 prostate cancer patients who underwent definitive radiotherapy. The PSMA-PET scans were registered to the MRI images within Eclipse (Varian Medical Systems), and the PET images were cropped to match the size and resolution of the MRI images, resulting in 321 pairs of 2D MRI-PET images. Preprocessing included grayscale transformation, z-score normalization, and pixel value inversion to enhance model learning. A Pix2Pix framework was implemented, employing a U-Net generator and a PatchGAN discriminator. The loss function combined an adversarial loss, to ensure the realism of the generated images, with an L1 loss, to maintain pixel-wise consistency between the generated and target images.

Model evaluation was performed using leave-one-out cross-validation (LOOCV), where all slices from one patient were used for testing and the remaining 9 patients for training. The model achieved an average Peak Signal-to-Noise Ratio (PSNR) of 14.55 and a Structural Similarity Index Measure (SSIM) of 0.648.
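The generator objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the L1 weight `LAMBDA_L1` is an assumption (the abstract does not report it; 100 is the value used in the original Pix2Pix paper), and images are represented as flat lists of pixel intensities for simplicity.

```python
import math

# Assumed L1 weight; the abstract does not state the value used.
LAMBDA_L1 = 100.0

def l1_loss(generated, target):
    """Mean absolute pixel difference between generated and target images."""
    assert len(generated) == len(target)
    return sum(abs(g - t) for g, t in zip(generated, target)) / len(generated)

def adversarial_loss(disc_scores_on_fake):
    """Non-saturating generator GAN loss, -log D(G(x)), averaged over the
    PatchGAN discriminator's per-patch scores (each in (0, 1))."""
    return -sum(math.log(s) for s in disc_scores_on_fake) / len(disc_scores_on_fake)

def generator_loss(generated, target, disc_scores_on_fake):
    """Total Pix2Pix generator loss: adversarial term + weighted L1 term."""
    return adversarial_loss(disc_scores_on_fake) + LAMBDA_L1 * l1_loss(generated, target)

# Toy example: flattened 4-pixel "images" and two PatchGAN patch scores.
fake = [0.2, 0.5, 0.8, 0.1]
real = [0.25, 0.5, 0.7, 0.1]
print(round(generator_loss(fake, real, [0.9, 0.8]), 4))
```

The large L1 weight reflects the design choice in the abstract: the adversarial term pushes for realistic texture, while the dominant L1 term anchors the synthesized PET to the ground-truth pixel values.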
Although these quantitative metrics were moderate, qualitative evaluation demonstrated precise and clinically meaningful localization of lesions, offering utility for aiding physician visualization of high-risk regions. This study highlighted the feasibility of leveraging Pix2Pix-based conditional generative models to synthesize PET-equivalent images from MRI data as a cost-effective alternative for enhancing prostate cancer imaging. Future efforts will focus on expanding the dataset and investigating advanced architectures, including 3D-to-3D generative models and diffusion techniques, to further improve the accuracy of prostate lesion localization and clinical applicability.

Citation Format: Ruobing Liu, Shuo Wang, Shibiao Wan, Jieqiong Wang. Enhancing prostate pelvic multimodality data generating with conditional generative models: A Pix2Pix-based approach for MRI-to-PET synthesis [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular s); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1): nr 5016.
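The PSNR metric reported in the abstract can be computed as below. This is a generic sketch: the data range of 1.0 (images scaled to [0, 1]) is an assumption, since the abstract does not state how intensities were scaled before evaluation.

```python
import math

def psnr(generated, target, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE),
    where MAX is the assumed dynamic range of the images."""
    assert len(generated) == len(target)
    mse = sum((g - t) ** 2 for g, t in zip(generated, target)) / len(generated)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

# Toy example on flattened 3-pixel "images" in [0, 1]:
print(round(psnr([0.2, 0.5, 0.8], [0.3, 0.5, 0.7]), 2))
```

Higher PSNR means lower pixel-wise error; the reported average of 14.55 dB is modest, which is consistent with the abstract's note that qualitative lesion localization was the stronger result.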
Source journal: Cancer Research (Medicine – Oncology)
CiteScore: 16.10
Self-citation rate: 0.90%
Articles per year: 7677
Review time: 2.5 months
About the journal: Cancer Research, published by the American Association for Cancer Research (AACR), is a journal that focuses on impactful original studies, reviews, and opinion pieces relevant to the broad cancer research community. Manuscripts that present conceptual or technological advances leading to insights into cancer biology are particularly sought after. The journal also places emphasis on convergence science, which involves bridging multiple distinct areas of cancer research. With primary subsections including Cancer Biology, Cancer Immunology, Cancer Metabolism and Molecular Mechanisms, Translational Cancer Biology, Cancer Landscapes, and Convergence Science, Cancer Research has a comprehensive scope. It is published twice a month with one volume per year (print ISSN 0008-5472, online ISSN 1538-7445). Cancer Research is abstracted and/or indexed in BIOSIS Previews, MEDLINE, Current Contents/Life Sciences, Current Contents/Clinical Medicine, Science Citation Index, Scopus, and Web of Science.