Saeed Izadi, Isaac Shiri, Carlos F. Uribe, Parham Geramifar, Habib Zaidi, Arman Rahmim, Ghassan Hamarneh
DOI: 10.1016/j.zemedi.2024.01.002
Published: 2024-02-01 (Journal Article)
Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks
In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling applications to CT-less or MR-less PET scanners and improving performance in the presence of CT-related artifacts. A known characteristic of PET imaging is that tracer uptake varies across patients and anatomical regions. However, existing deep learning-based algorithms use a fixed model across different subjects and/or anatomical regions during inference, which can result in spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation- and scatter-corrected PET from non-attenuation-corrected images, without structural information at inference time. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model that performs subject- and region-specific filtering by modulating the convolution kernels in accordance with the contextual coherency of the neighboring slices. In this way, the context-aware convolution can guide the composition of intermediate features toward regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation, more than one order of magnitude larger than in previous work. In our experimental studies, qualitative assessments showed that our proposed CT-free method is capable of producing corrected PET images that closely resemble ground-truth images corrected with the aid of CT scans.
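To make the kernel-modulation idea concrete, here is a minimal NumPy sketch, not the paper's architecture: the modulation function (a scalar derived from the mean uptake of neighboring slices), the context window size, and all names below are illustrative assumptions.

```python
import numpy as np

def context_modulated_conv2d(volume, kernel, slice_idx, n_ctx=1):
    """Apply a 2-D convolution to one slice of a 3-D volume, with the
    kernel rescaled by a factor computed from the neighboring slices.
    This is an illustrative simplification of context-aware convolution:
    the paper learns the modulation; here it is a fixed function."""
    # Gather the contextual neighborhood of the target slice.
    lo = max(0, slice_idx - n_ctx)
    hi = min(volume.shape[0], slice_idx + n_ctx + 1)
    context = volume[lo:hi]
    # Derive an input-conditioned modulation factor (assumed form).
    gamma = 1.0 / (1.0 + context.mean())
    k = kernel * gamma  # subject-/region-specific kernel
    # Plain "valid" 2-D convolution of the target slice with k.
    s = volume[slice_idx]
    kh, kw = k.shape
    out = np.zeros((s.shape[0] - kh + 1, s.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (s[i:i + kh, j:j + kw] * k).sum()
    return out
```

Because the modulation factor depends on the input volume itself, two subjects with different uptake levels are effectively filtered with different kernels, which is the behavior the fixed-model baselines lack.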
For quantitative assessments, we evaluated our proposed method over 112 held-out subjects and achieved an absolute relative error of 14.30 ± 3.88% and a relative error of −2.11% ± 2.73% in the whole body.
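The two reported metrics can be computed as follows; this is a minimal sketch using commonly assumed definitions (signed relative error for bias, absolute relative error for magnitude), since the abstract does not spell out whether they are voxel- or ROI-level.

```python
import numpy as np

def relative_errors(pred, ref):
    """Mean relative error (signed, in %) and mean absolute relative
    error (in %) between a predicted and a reference uptake map.
    Assumed definitions: RE = 100*(pred - ref)/ref, ARE = |RE|."""
    re = 100.0 * (pred - ref) / ref   # signed bias per element
    are = np.abs(re)                  # error magnitude per element
    return re.mean(), are.mean()
```

Note that the signed error can average near zero (over- and under-estimation cancel) while the absolute error stays large, which is why both are reported.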