Subject-aware PET Denoising with Contrastive Adversarial Domain Generalization.

X Liu, T Marin, S Vafay Eslahi, A Tiss, Y Chemli, K A Johnson, G El Fakhri, J Ouyang
{"title":"Subject-aware PET Denoising with Contrastive Adversarial Domain Generalization.","authors":"X Liu, T Marin, S Vafay Eslahi, A Tiss, Y Chemli, K A Johson, G El Fakhri, J Ouyang","doi":"10.1109/nss/mic/rtsd57108.2024.10656150","DOIUrl":null,"url":null,"abstract":"<p><p>Recent advances in deep learning (DL) have greatly improved the performance of positron emission tomography (PET) denoising performance. However, DL model performance can vary a lot across subjects, due to the large variability of the count levels and spatial distributions. A generalizable DL model that mitigates the subject-wise variations is highly expected toward a reliable and trustworthy system for clinical application. In this work, we propose a contrastive adversarial learning framework for subject-wise domain generalization (DG). Specifically, we configure a contrastive discriminator in addition to the UNet-based denoising module to check the subject-related information in the bottleneck feature, while the denoising module is adversarially trained to enforce the extraction of subject-invariant features. The sampled low-count realizations from the list-mode data are used as anchor-positive pairs to be close to each other, while the other subjects are used as negative samples to be distributed far away. We evaluated on 97 <sup>18</sup>F-MK6240 tau PET studies, each having 20 noise realizations with 25% fractions of events. Training, validation, and testing were implemented using 1400, 120, and 420 pairs of 3D image volumes in a subject-independent manner. The proposed contrastive adversarial DG demonstrated superior denoising performance than conventional UNet without subject-wise DG and cross-entropy-based adversarial DG.</p>","PeriodicalId":73298,"journal":{"name":"IEEE Nuclear Science Symposium conference record. Nuclear Science Symposium","volume":"2024 ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11497478/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Nuclear Science Symposium conference record. Nuclear Science Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/nss/mic/rtsd57108.2024.10656150","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/9/25 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent advances in deep learning (DL) have greatly improved positron emission tomography (PET) denoising performance. However, DL model performance can vary substantially across subjects because of the large variability in count levels and spatial distributions. A generalizable DL model that mitigates subject-wise variation is therefore highly desirable for a reliable and trustworthy clinical system. In this work, we propose a contrastive adversarial learning framework for subject-wise domain generalization (DG). Specifically, we add a contrastive discriminator to the UNet-based denoising module to detect subject-related information in the bottleneck feature, while the denoising module is adversarially trained to enforce the extraction of subject-invariant features. Low-count realizations sampled from the list-mode data of the same subject serve as anchor-positive pairs that are pulled close together, while realizations from other subjects serve as negative samples that are pushed far apart. We evaluated the method on 97 18F-MK6240 tau PET studies, each with 20 noise realizations containing 25% of the events. Training, validation, and testing used 1400, 120, and 420 pairs of 3D image volumes, respectively, in a subject-independent manner. The proposed contrastive adversarial DG achieved better denoising performance than both a conventional UNet without subject-wise DG and a cross-entropy-based adversarial DG.
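To make the training scheme in the abstract concrete, below is a minimal PyTorch sketch of a contrastive adversarial DG step: a toy encoder-decoder stands in for the 3D UNet denoising module, a small projection head acts as the contrastive discriminator on the bottleneck feature, and an InfoNCE-style loss treats two low-count realizations of the same subject as an anchor-positive pair and other subjects as negatives. All module and variable names (DenoisingUNet, ContrastiveDiscriminator, lambda_adv, tau), the network sizes, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Sketch of contrastive adversarial subject-wise domain generalization (assumed details).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenoisingUNet(nn.Module):
    """Toy encoder-decoder standing in for the 3D UNet denoising module."""
    def __init__(self, ch=1, feat=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(feat, ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.enc(x)            # bottleneck feature examined by the discriminator
        return self.dec(z), z


class ContrastiveDiscriminator(nn.Module):
    """Projects bottleneck features to an embedding used for subject contrast."""
    def __init__(self, feat=64, dim=128):
        super().__init__()
        self.proj = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(feat, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return F.normalize(self.proj(z), dim=1)


def info_nce(anchor, positive, negatives, tau=0.1):
    """anchor/positive: embeddings of two low-count realizations of one subject.
    negatives: embeddings from other subjects in the batch."""
    pos = (anchor * positive).sum(1, keepdim=True) / tau   # (B, 1)
    neg = anchor @ negatives.t() / tau                     # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)                 # positive is class 0


denoiser, disc = DenoisingUNet(), ContrastiveDiscriminator()
opt_g = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
lambda_adv = 0.1  # assumed weighting of the adversarial term


def train_step(x_a, x_p, x_neg, target):
    """x_a, x_p: two low-count realizations of one subject; x_neg: other subjects."""
    # 1) Discriminator step: pull same-subject embeddings together, push other subjects apart.
    with torch.no_grad():
        _, z_a = denoiser(x_a); _, z_p = denoiser(x_p); _, z_n = denoiser(x_neg)
    loss_d = info_nce(disc(z_a), disc(z_p), disc(z_n))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Denoiser step: fit the full-count target while *maximizing* the contrastive loss,
    #    which penalizes subject-identifiable information in the bottleneck feature.
    pred, z_a = denoiser(x_a)
    _, z_p = denoiser(x_p); _, z_n = denoiser(x_neg)
    loss_g = F.l1_loss(pred, target) - lambda_adv * info_nce(disc(z_a), disc(z_p), disc(z_n))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

The alternating updates mirror the adversarial setup described above: the discriminator learns to recognize which bottleneck features come from the same subject, and the denoiser is penalized whenever it succeeds, pushing the bottleneck toward subject-invariant features. The L1 fidelity loss and the single-step alternation are assumptions for the sketch.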
