Deep Learning-powered CT-less Multi-tracer Organ Segmentation from PET Images: A solution for unreliable CT segmentation in PET/CT Imaging

Yazdan Salimi, Zahra Mansouri, Isaac Shiri, Ismini Mainta, Habib Zaidi
{"title":"Deep Learning-powered CT-less Multi-tracer Organ Segmentation from PET Images: A solution for unreliable CT segmentation in PET/CT Imaging","authors":"Yazdan Salimi, Zahra Mansouri, Isaac Shiri, Ismini Mainta, Habib Zaidi","doi":"10.1101/2024.08.27.24312482","DOIUrl":null,"url":null,"abstract":"Introduction: The common approach for organ segmentation in hybrid imaging relies on co-registered CT (CTAC) images. This method, however, presents several limitations in real clinical workflows where mismatch between PET and CT images are very common. Moreover, low-dose CTAC images have poor quality, thus challenging the segmentation task. Recent advances in CT-less PET imaging further highlight the necessity for an effective PET organ segmentation pipeline that does not rely on CT images. Therefore, the goal of this study was to develop a CT-less multi-tracer PET segmentation framework.\nMethods: We collected 2062 PET/CT images from multiple scanners. The patients were injected with either 18F-FDG (1487) or 68Ga-PSMA (575). PET/CT images with any kind of mismatch between PET and CT images were detected through visual assessment and excluded from our study. Multiple organs were delineated on CT components using previously trained in-house developed nnU-Net models. The segmentation masks were resampled to co-registered PET images and used to train four different deep-learning models using different images as input, including non-corrected PET (PET-NC) and attenuation and scatter-corrected PET (PET-ASC) for 18F-FDG (tasks #1 and #2, respectively using 22 organs) and PET-NC and PET-ASC for 68Ga tracers (tasks #3 and #4, respectively, using 15 organs). The models performance was evaluated in terms of Dice coefficient, Jaccard index, and segment volume difference.\nResults: The average Dice coefficient over all organs was 0.81±0.15, 0.82±0.14, 0.77±0.17, and 0.79±0.16 for tasks #1, #2, #3, and #4, respectively. 
PET-ASC models outperformed PET-NC models (P-value < 0.05). The highest Dice values were achieved for the brain (0.93 to 0.96 in all four tasks), whereas the lowest values were achieved for small organs, such as the adrenal glands. The trained models showed robust performance on dynamic noisy images as well.\nConclusion: Deep learning models allow high performance multi-organ segmentation for two popular PET tracers without the use of CT information. These models may tackle the limitations of using CT segmentation in PET/CT image quantification, kinetic modeling, radiomics analysis, dosimetry, or any other tasks that require organ segmentation masks.","PeriodicalId":501358,"journal":{"name":"medRxiv - Radiology and Imaging","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Radiology and Imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.08.27.24312482","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Introduction: The common approach for organ segmentation in hybrid imaging relies on co-registered CT (CTAC) images. This method, however, presents several limitations in real clinical workflows, where mismatch between PET and CT images is very common. Moreover, low-dose CTAC images have poor quality, further challenging the segmentation task. Recent advances in CT-less PET imaging highlight the need for an effective PET organ segmentation pipeline that does not rely on CT images. The goal of this study was therefore to develop a CT-less, multi-tracer PET segmentation framework.

Methods: We collected 2062 PET/CT images from multiple scanners. The patients were injected with either 18F-FDG (1487) or 68Ga-PSMA (575). PET/CT studies with any kind of mismatch between the PET and CT images, detected through visual assessment, were excluded from the study. Multiple organs were delineated on the CT components using previously trained, in-house-developed nnU-Net models. The segmentation masks were resampled to the co-registered PET images and used to train four different deep-learning models, each taking different images as input: non-corrected PET (PET-NC) and attenuation- and scatter-corrected PET (PET-ASC) for 18F-FDG (tasks #1 and #2, respectively, using 22 organs), and PET-NC and PET-ASC for 68Ga tracers (tasks #3 and #4, respectively, using 15 organs). The models' performance was evaluated in terms of Dice coefficient, Jaccard index, and segment volume difference.

Results: The average Dice coefficient over all organs was 0.81±0.15, 0.82±0.14, 0.77±0.17, and 0.79±0.16 for tasks #1, #2, #3, and #4, respectively. PET-ASC models outperformed PET-NC models (P-value < 0.05). The highest Dice values were achieved for the brain (0.93 to 0.96 across all four tasks), whereas the lowest values were achieved for small organs, such as the adrenal glands. The trained models also showed robust performance on noisy dynamic images.

Conclusion: Deep learning models enable high-performance multi-organ segmentation for two popular PET tracers without the use of CT information. These models may overcome the limitations of CT-based segmentation in PET/CT image quantification, kinetic modeling, radiomics analysis, dosimetry, or any other task that requires organ segmentation masks.
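The three evaluation metrics named above (Dice coefficient, Jaccard index, and segment volume difference) can be sketched for a pair of binary segmentation masks as follows. This is an illustrative implementation, not the authors' evaluation code; the function name and the `voxel_volume_ml` parameter (per-voxel volume used to express the volume difference in millilitres) are assumptions, since the abstract does not state the units used.

```python
import numpy as np


def segmentation_metrics(pred: np.ndarray, ref: np.ndarray,
                         voxel_volume_ml: float = 1.0):
    """Dice, Jaccard, and signed volume difference between two binary masks.

    `pred` and `ref` are arrays of the same shape; nonzero voxels count as
    foreground. `voxel_volume_ml` is a hypothetical per-voxel volume.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    total = pred.sum() + ref.sum()
    # Convention: both masks empty -> perfect agreement.
    dice = 2.0 * intersection / total if total > 0 else 1.0
    jaccard = intersection / union if union > 0 else 1.0
    volume_diff = (int(pred.sum()) - int(ref.sum())) * voxel_volume_ml
    return float(dice), float(jaccard), float(volume_diff)
```

Note that Dice and Jaccard are monotonically related (J = D / (2 - D)), so reporting both mainly aids comparison with prior work, while the volume difference captures systematic over- or under-segmentation that overlap metrics can mask.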