IgCONDA-PET: Weakly-supervised PET anomaly detection using implicitly-guided attention-conditional counterfactual diffusion modeling — a multi-center, multi-cancer, and multi-tracer study

IF 4.9 · CAS Zone 2 (Medicine) · Q1 (Engineering, Biomedical)
Shadab Ahamed, Arman Rahmim
{"title":"IgCONDA-PET: Weakly-supervised PET anomaly detection using implicitly-guided attention-conditional counterfactual diffusion modeling — a multi-center, multi-cancer, and multi-tracer study","authors":"Shadab Ahamed ,&nbsp;Arman Rahmim","doi":"10.1016/j.compmedimag.2025.102615","DOIUrl":null,"url":null,"abstract":"<div><div>Minimizing the need for pixel-level annotated data to train PET lesion detection and segmentation networks is highly desired and can be transformative, given time and cost constraints associated with expert annotations. Current unsupervised or weakly-supervised anomaly detection methods rely on autoencoder or generative adversarial networks (GANs) trained only on healthy data. While these approaches reduce annotation dependency, GAN-based methods are notably more challenging to train than non-GAN alternatives (such as autoencoders) due to issues such as the simultaneous optimization of two competing networks, mode collapse, and training instability. In this paper, we present the weakly-supervised <strong>I</strong>mplicitly <strong>g</strong>uided <strong>CO</strong>u<strong>N</strong>terfactual diffusion model for <strong>D</strong>etecting <strong>A</strong>nomalies in <strong>PET</strong> images (IgCONDA-PET). The solution is developed and validated using PET scans from six retrospective cohorts consisting of a total of 2652 cases (multi-cancer, multi-tracer) containing both local and public datasets (spanning multiple centers). The training is conditioned on image class labels (healthy vs. unhealthy) via attention modules, and we employ implicit diffusion guidance. We perform counterfactual generation which facilitates “unhealthy-to-healthy” domain translation by generating a synthetic, healthy version of an unhealthy input image, enabling the detection of anomalies through the calculated differences. The performance of our method was compared against several other deep learning based weakly-supervised or unsupervised methods as well as traditional methods like 41% SUV<span><math><msub><mrow></mrow><mrow><mtext>max</mtext></mrow></msub></math></span> thresholding. We also highlight the importance of incorporating attention modules in our network for the detection of small anomalies. The code is publicly available at: <span><span>https://github.com/ahxmeds/IgCONDA-PET.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102615"},"PeriodicalIF":4.9000,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0895611125001247","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Minimizing the need for pixel-level annotated data to train PET lesion detection and segmentation networks is highly desired and can be transformative, given the time and cost constraints associated with expert annotations. Current unsupervised or weakly-supervised anomaly detection methods rely on autoencoders or generative adversarial networks (GANs) trained only on healthy data. While these approaches reduce annotation dependency, GAN-based methods are notably more challenging to train than non-GAN alternatives (such as autoencoders) due to issues such as the simultaneous optimization of two competing networks, mode collapse, and training instability. In this paper, we present the weakly-supervised Implicitly guided COuNterfactual diffusion model for Detecting Anomalies in PET images (IgCONDA-PET). The solution is developed and validated using PET scans from six retrospective cohorts comprising a total of 2652 cases (multi-cancer, multi-tracer), drawn from both local and public datasets spanning multiple centers. Training is conditioned on image class labels (healthy vs. unhealthy) via attention modules, and we employ implicit diffusion guidance. We perform counterfactual generation, which facilitates "unhealthy-to-healthy" domain translation: a synthetic, healthy version of an unhealthy input image is generated, and anomalies are detected from the differences between the two. The performance of our method was compared against several other deep learning-based weakly-supervised or unsupervised methods, as well as traditional methods such as 41% SUVmax thresholding. We also highlight the importance of incorporating attention modules in our network for the detection of small anomalies. The code is publicly available at: https://github.com/ahxmeds/IgCONDA-PET.git.
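The core mechanism the abstract describes — classifier-free ("implicit") guidance steering a class-conditional diffusion denoiser toward the "healthy" label, with anomalies read off from the input-counterfactual difference — can be sketched compactly. The snippet below is a minimal illustration of that idea, not the authors' implementation: the `eps_model` interface, the label constants, the partial-noising level, the guidance scale, and the `suv_max_threshold` helper are all illustrative assumptions, and the reverse pass uses a deterministic DDIM-style update for brevity.

```python
import torch

HEALTHY = 0    # hypothetical class-label encodings; the real model's
UNHEALTHY = 1  # conditioning interface may differ

def guided_eps(eps_model, x_t, t, label, guidance_scale):
    """Classifier-free (implicit) guidance: blend the unconditional and
    class-conditional noise predictions from a single network."""
    eps_uncond = eps_model(x_t, t, label=None)   # unconditional prediction
    eps_cond = eps_model(x_t, t, label=label)    # class-conditional prediction
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

@torch.no_grad()
def healthy_counterfactual(eps_model, x0, alphas_cumprod,
                           noise_level=400, guidance_scale=3.0):
    """Noise the input part-way along the forward process, then denoise it
    back while guiding toward the 'healthy' class (DDIM-style, eta = 0).
    Assumes noise_level < len(alphas_cumprod)."""
    a = alphas_cumprod[noise_level]
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)
    for t in reversed(range(1, noise_level + 1)):
        eps = guided_eps(eps_model, x_t, torch.tensor([t]), HEALTHY, guidance_scale)
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        x0_pred = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()     # predicted clean image
        x_t = a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps  # deterministic step
    return x_t

def anomaly_map(x0, counterfactual):
    """Anomalies appear where the input departs from its healthy version."""
    return (x0 - counterfactual).abs()

def suv_max_threshold(pet, fraction=0.41):
    """The traditional 41% SUVmax baseline: keep voxels at or above 41% of
    the maximum uptake (in practice applied per lesion region, shown
    whole-image here for simplicity)."""
    return pet >= fraction * pet.max()
```

In the paper's actual model, class conditioning enters through attention modules inside the denoising network; the sketch abstracts this into a single `label` argument, and partially noising the input (rather than sampling from pure noise) is what keeps the counterfactual anatomically close to the original scan.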
Source journal: Computerized Medical Imaging and Graphics
CiteScore: 10.70
Self-citation rate: 3.50%
Articles per year: 71
Review time: 26 days
Journal description: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.