Noise-aware system generative model (NASGM): a positron emission tomography (PET) image simulation framework with observer validation studies

IF 3.2 | Medicine, CAS Tier 2 | Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Medical Physics · Pub Date: 2025-07-15 · DOI: 10.1002/mp.17962
Suya Li, Kaushik Dutta, Debojyoti Pal, Kooresh I. Shoghi
Medical Physics, vol. 52, no. 7. Full-text PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/mp.17962
Citations: 0

Abstract



Background

Simulation of positron emission tomography (PET) images is critical in dynamic imaging protocol optimization, quantitative imaging metric development, deep learning applications, and virtual imaging trials. These applications rely heavily on large volumes of simulated PET data. However, current state-of-the-art PET image simulation platforms are prohibitively time-consuming and computationally intensive. Although deep learning-based generative models have been widely applied to generate PET images, they often fail to adequately account for the differing acquisition times of PET images.

Purpose

This study seeks to develop and validate a novel deep learning-based method, the noise-aware system generative model (NASGM), to simulate PET images of different acquisition times.

Methods

NASGM is based on a conditional generative adversarial network and features a novel dual-domain discriminator containing a spatial branch and a frequency branch to leverage information from both domains. A transformer-based structure is used for the frequency discriminator because of its ability to encode positional information and capture global dependencies. The study is conducted on a simulated dataset, with a public PET/CT dataset providing the input activity and attenuation maps and an analytical PET simulation tool producing target PET images of different acquisition times. Ablation studies are carried out to confirm the necessity of the dual-domain discriminator. A comprehensive suite of evaluations, including image fidelity assessment, noise measurement, quantitative accuracy validation, task-based assessment, texture analysis, and a human observer study, is performed to confirm the realism of the generated images. The Wilcoxon signed-rank test with Bonferroni correction is applied to compare NASGM with the other networks in the ablation study at an adjusted p-value ≤ 0.01, and the alignment of features between the generated and target images is measured by the concordance correlation coefficient (CCC).
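The CCC used above has a simple closed form, and the Bonferroni correction is just a division of the significance level by the number of comparisons. A minimal sketch of both, with illustrative feature values only (the paper's actual Wilcoxon signed-rank comparisons would typically use `scipy.stats.wilcoxon`):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient: agreement between
    paired features of generated and target images (1 = perfect)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()  # population variances, per Lin's definition
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

def bonferroni_alpha(alpha, n_comparisons):
    """Bonferroni-adjusted per-test significance threshold."""
    return alpha / n_comparisons

# Illustrative values only: identical feature vectors give CCC = 1
print(ccc([0.8, 0.9, 1.0, 1.1], [0.8, 0.9, 1.0, 1.1]))  # → 1.0
```

Unlike the Pearson correlation, the CCC penalizes both location and scale shifts, so it rewards generated features that match the target values, not just their trend.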

Results

Quantitative accuracy was measured by the correlation of mean recovery coefficients across tumor groups; the NASGM-generated images achieved CCC values of 0.95 for most acquisition times, which also illustrates NASGM's ability to replicate the partial volume effect observed in the target images. Furthermore, NASGM was demonstrated to generate images whose noise characteristics and textures closely match those of the target PET images. In a tumor-detection task-based observer study, the synthesized images achieved performance comparable to the target images on this clinically relevant task. In the two-alternative forced-choice human observer study, observers achieved an accuracy of ∼50% for all tested acquisition times, confirming that the synthesized and target images are visually indistinguishable to human observers. Moreover, NASGM displayed strong generalizability within the training range, successfully generating images with frame durations not included in the training dataset.
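Whether an observed 2AFC accuracy is statistically consistent with the 50% chance level expected for indistinguishable images can be checked with an exact binomial test. The sketch below uses only the standard library; the trial counts are hypothetical, not taken from the paper:

```python
from math import comb

def two_sided_binom_p(k, n):
    """Exact two-sided binomial test p-value against p = 0.5.
    For p = 0.5 the distribution is symmetric, so doubling the
    smaller tail is exact."""
    tail = min(k, n - k)
    p_tail = sum(comb(n, i) for i in range(tail + 1)) * 0.5 ** n
    return min(1.0, 2.0 * p_tail)

# Hypothetical numbers: 52 correct responses out of 100 2AFC trials
p_val = two_sided_binom_p(52, 100)
consistent_with_chance = p_val > 0.05  # True: cannot reject chance level
```

An accuracy near 50% with a large p-value is the expected outcome when observers genuinely cannot tell synthesized from target images.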

Conclusions

NASGM is developed and validated as a deep learning-based PET simulation framework that offers computationally efficient image generation compared to traditional methods, making it an ideal tool for producing large volumes of simulated PET image datasets across varying acquisition times. Furthermore, the dual-domain discriminator enhances the quality of generated images, while the noise-aware mechanism introduces realistic, controllable noise variability.
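As one illustration of the dual-domain idea, the frequency branch of a discriminator needs a frequency-domain representation of each image. A common choice (assumed here for illustration; the paper's transformer-based branch is not specified in this abstract) is the centered log-magnitude spectrum:

```python
import numpy as np

def frequency_branch_input(image):
    """Centered log-magnitude spectrum of a 2-D image: a plausible
    input representation for a frequency-domain discriminator branch."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # DC term moved to center
    return np.log1p(np.abs(spectrum))               # compress dynamic range

img = np.random.default_rng(0).random((64, 64))  # stand-in for a PET slice
feat = frequency_branch_input(img)               # same shape as the input
```

Noise level changes with acquisition time primarily alter the high-frequency content of PET images, which is why a discriminator branch operating on such a spectrum can complement a purely spatial one.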

Source journal

Medical Physics (Medicine, Nuclear Medicine)
CiteScore: 6.80
Self-citation rate: 15.80%
Articles per year: 660
Review turnaround: 1.7 months
About the journal: Medical Physics publishes original, high impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in 1) Basic science developments with high potential for clinical translation 2) Clinical applications of cutting edge engineering and physics innovations 3) Broadly applicable and innovative clinical physics developments Medical Physics is a journal of global scope and reach. By publishing in Medical Physics your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering based translational scientists. We work closely with authors of promising articles to improve their quality.