Noise-aware system generative model (NASGM): positron emission tomography (PET) image simulation framework with observer validation studies

Suya Li, Kaushik Dutta, Debojyoti Pal, Kooresh I. Shoghi

Medical Physics, vol. 52, issue 7. Published 2025-07-15. DOI: 10.1002/mp.17962. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/mp.17962
Background
Simulation of positron emission tomography (PET) images is critical in dynamic imaging protocol optimization, quantitative imaging metric development, deep learning applications, and virtual imaging trials. These applications rely heavily on large volumes of simulated PET data. However, current state-of-the-art PET image simulation platforms are prohibitively time-consuming and computationally intensive. Although deep learning-based generative models have been widely applied to generate PET images, they often fail to adequately account for the differing acquisition times of PET images.
Purpose
This study seeks to develop and validate a novel deep learning-based method, the noise-aware system generative model (NASGM), to simulate PET images of different acquisition times.
Methods
NASGM is based on the conditional generative adversarial network and features a novel dual-domain discriminator that contains a spatial and a frequency branch to leverage information from both domains. A transformer-based structure is used for the frequency discriminator because of its ability to encode positional information and capture global dependencies. The study is conducted on a simulated dataset, with a public PET/CT dataset as the input activity and attenuation maps, and an analytical PET simulation tool to simulate PET images of different acquisition times. Ablation studies are carried out to confirm the necessity of adopting the dual-domain discriminator. A comprehensive suite of evaluations, including image fidelity assessment, noise measurement, quantitative accuracy validation, task-based assessment, texture analysis, and a human observer study, is performed to confirm the realism of the generated images. The Wilcoxon signed-rank test with Bonferroni correction is applied to compare NASGM with the other networks in the ablation study at an adjusted p-value ≤ 0.01, and the alignment of features between the generated and target images is measured by the concordance correlation coefficient (CCC).
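The abstract does not publish the dual-domain discriminator's implementation; as a minimal sketch of the kind of paired input such a discriminator could consume, one can feed each image alongside its centered log-magnitude Fourier spectrum, so the frequency branch sees noise structure that is hard to judge in the spatial domain. The function name and preprocessing choices below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def dual_domain_inputs(image: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return the (spatial, frequency) pair a dual-domain discriminator
    could receive: the raw image and its centered log-magnitude spectrum.

    Illustrative only -- NASGM's actual branches are not specified here.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # 2-D FFT, DC term at center
    log_mag = np.log1p(np.abs(spectrum))            # compress the dynamic range
    return image, log_mag

# Toy 64x64 "PET slice" with Poisson-like count noise.
rng = np.random.default_rng(0)
img = rng.poisson(lam=5.0, size=(64, 64)).astype(float)
spatial, freq = dual_domain_inputs(img)
print(spatial.shape, freq.shape)  # (64, 64) (64, 64)
```

The log-magnitude transform is a common choice because PET noise at short acquisition times shows up as excess high-frequency power, which a frequency branch can penalize directly.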
Results
Quantitative accuracy was measured by the correlation of mean recovery coefficients across tumor groups, and the NASGM-generated images achieved CCC values of 0.95 across most of the acquisition times, which also illustrates NASGM's ability to replicate the partial volume effect in the target images. Furthermore, NASGM was demonstrated to generate images whose noise characteristics and textures closely match those of the target PET images. In a tumor detection task-based observer study, the synthesized images achieved performance comparable to the target images on this clinically relevant task. In the two-alternative forced-choice human observer study, observers achieved an accuracy of ∼50% for all tested acquisition times, confirming that the synthesized and target images are visually indistinguishable to human observers. Moreover, NASGM displayed strong generalizability within the training range, successfully generating images of frame durations not included in the training dataset.
Conclusions
NASGM is developed and validated as a deep learning-based PET simulation framework that offers computationally efficient image generation compared to traditional methods, making it an ideal tool for producing large volumes of simulated PET image datasets across varying acquisition times. Furthermore, the dual-domain discriminator enhances the quality of generated images, while the noise-aware mechanism introduces realistic, controllable noise variability.
Journal Introduction
Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in: 1) basic science developments with high potential for clinical translation; 2) clinical applications of cutting-edge engineering and physics innovations; and 3) broadly applicable and innovative clinical physics developments.
Medical Physics is a journal of global scope and reach. By publishing in Medical Physics, your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.