DUAL PROMPTING FOR DIVERSE COUNT-LEVEL PET DENOISING
Xiaofeng Liu, Yongsong Huang, Thibault Marin, Samira Vafay Eslahi, Amal Tiss, Yanis Chemli, Keith A Johnson, Georges El Fakhri, Jinsong Ouyang
ArXiv, published 2025-05-05. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12083701/pdf/
Citations: 0
Abstract
The positron emission tomography (PET) volumes to be denoised inherently span diverse count levels, which makes it challenging for a single unified model to handle the varied cases. In this work, we draw on the recently flourishing field of prompt learning to achieve generalizable PET denoising across different count levels. Specifically, we propose dual prompts that guide PET denoising in a divide-and-conquer manner: an explicit count-level prompt that provides case-specific prior information, and an implicit general denoising prompt that encodes essential PET denoising knowledge. A novel prompt fusion module then unifies these heterogeneous prompts, followed by a prompt-feature interaction module that injects the prompts into the network features. The prompts dynamically guide the noise-conditioned denoising process, so we can efficiently train a single unified denoising model for various count levels and deploy it to different cases with personalized prompts. We evaluated the method on 1,940 low-count PET 3D volumes, generated by uniformly randomly selecting 13-22% fractions of events from 97 18F-MK6240 tau PET studies. The results show that our dual prompting substantially improves performance when the count level is known and outperforms a count-conditional model.
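To give a concrete picture of the dual-prompting scheme, below is a minimal PyTorch sketch. The module sizes, the embedding-based count-level prompt, the single learnable general prompt, and the channel-wise modulation used for prompt-feature interaction are all illustrative assumptions; the abstract does not specify the actual architecture, so this is a sketch of the idea rather than the authors' implementation.

# Minimal sketch of dual prompting for count-level-aware denoising (assumed design).
import torch
import torch.nn as nn


class DualPromptDenoiser(nn.Module):
    def __init__(self, feat_dim=64, prompt_dim=64, num_count_levels=10):
        super().__init__()
        # Explicit count-level prompt: one learnable embedding per count-level bin.
        self.count_prompt = nn.Embedding(num_count_levels, prompt_dim)
        # Implicit general denoising prompt: a single learnable vector shared by all cases.
        self.general_prompt = nn.Parameter(torch.randn(prompt_dim))
        # Prompt fusion module: unify the two heterogeneous prompts into one vector.
        self.fuse = nn.Sequential(nn.Linear(2 * prompt_dim, prompt_dim), nn.ReLU())
        # Prompt-feature interaction: inject the fused prompt into image features
        # via channel-wise scale/shift modulation (an assumed mechanism).
        self.to_scale = nn.Linear(prompt_dim, feat_dim)
        self.to_shift = nn.Linear(prompt_dim, feat_dim)
        # Toy 3D backbone standing in for the real denoising network.
        self.encoder = nn.Conv3d(1, feat_dim, 3, padding=1)
        self.decoder = nn.Conv3d(feat_dim, 1, 3, padding=1)

    def forward(self, noisy_volume, count_level_idx):
        feat = torch.relu(self.encoder(noisy_volume))           # (B, C, D, H, W)
        cp = self.count_prompt(count_level_idx)                 # (B, P) explicit prompt
        gp = self.general_prompt.expand(cp.shape[0], -1)        # (B, P) implicit prompt
        prompt = self.fuse(torch.cat([cp, gp], dim=-1))         # (B, P) fused prompt
        scale = self.to_scale(prompt)[:, :, None, None, None]   # (B, C, 1, 1, 1)
        shift = self.to_shift(prompt)[:, :, None, None, None]
        feat = feat * (1 + scale) + shift                       # prompt-conditioned features
        return self.decoder(feat)                               # denoised volume


# Usage: two noisy volumes with different (hypothetical) count-level bins.
if __name__ == "__main__":
    model = DualPromptDenoiser()
    x = torch.randn(2, 1, 16, 16, 16)
    levels = torch.tensor([3, 7])
    print(model(x, levels).shape)  # torch.Size([2, 1, 16, 16, 16])

In this sketch, a single trained model serves all count levels: at deployment, only the count-level index fed to the explicit prompt changes, which mirrors the paper's goal of one unified model personalized per case through its prompts.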