{"title":"HYPR4D Kernel Method With an Unsupervised 2.5SD+0.5TD Deep Learning Assisted Kernel Matrix","authors":"Ju-Chieh Kevin Cheng;Erik Reimers;Vesna Sossi","doi":"10.1109/TRPMS.2024.3442690","DOIUrl":null,"url":null,"abstract":"We describe a deep learning (DL) assisted HYPR4D kernelized reconstruction which produces low-noise voxel-level time-activity-curves (TACs) while preserving quantification within small structures as well as consistent spatiotemporal patterns/features within measured data. The proposed method consists of the following advantages over other methods: 1) unsupervised single subject network training scheme independent of positron emission tomography (PET) tracers; 2) training data generated on-the-fly during reconstruction; 3) intrinsic spatiotemporal consistency provided by minimizing the \n<inline-formula> <tex-math>$L_{2}$ </tex-math></inline-formula>\n loss using pseudo 4-D (i.e., 2.5 Spatial Dimension + 0.5 Temporal Dimension or 2.5SD+0.5TD) patches between kernelized OSEM subset estimates; and 4) a final tuning step which minimizes over-smoothing from the network output within the kernel matrix. Contrast phantom, human [18F]FDG and [11C]RAC data acquired on GE SIGNA PET/MR were used for evaluations. The proposed DL HYPR4D kernel method outperformed the standard HYPR4D kernel method as well as TOF-OSEM and TOF-BSREM (Q.Clear) in terms contrast recovery versus noise. The proposed final tuning reduced the underestimation bias due to over-smoothing within a 4-mm target structure from ~15% to ~2% while maintaining low-noise voxel-level TACs. In addition, the proposed unsupervised DL assisted reconstruction also outperformed the supervised DL version in terms of bias reduction along the TACs and kinetic model fittings.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"9 1","pages":"20-28"},"PeriodicalIF":4.6000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10634197/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Abstract
We describe a deep learning (DL) assisted HYPR4D kernelized reconstruction which produces low-noise voxel-level time-activity curves (TACs) while preserving quantification within small structures as well as consistent spatiotemporal patterns/features within the measured data. The proposed method offers the following advantages over other methods: 1) an unsupervised single-subject network training scheme independent of positron emission tomography (PET) tracers; 2) training data generated on-the-fly during reconstruction; 3) intrinsic spatiotemporal consistency provided by minimizing the $L_{2}$ loss using pseudo 4-D (i.e., 2.5 spatial dimension + 0.5 temporal dimension, or 2.5SD+0.5TD) patches between kernelized OSEM subset estimates; and 4) a final tuning step which minimizes over-smoothing from the network output within the kernel matrix. Contrast phantom, human [18F]FDG, and [11C]RAC data acquired on the GE SIGNA PET/MR were used for evaluation. The proposed DL HYPR4D kernel method outperformed the standard HYPR4D kernel method as well as TOF-OSEM and TOF-BSREM (Q.Clear) in terms of contrast recovery versus noise. The proposed final tuning reduced the underestimation bias due to over-smoothing within a 4-mm target structure from ~15% to ~2% while maintaining low-noise voxel-level TACs. In addition, the proposed unsupervised DL assisted reconstruction also outperformed the supervised DL version in terms of bias reduction along the TACs and in kinetic model fitting.
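To make the unsupervised training idea in point 3) more concrete, the sketch below illustrates, under stated assumptions, how pseudo 4-D (2.5SD+0.5TD) patches could be drawn from two kernelized OSEM subset estimates and used in an $L_{2}$ loss, so the network learns single-subject spatiotemporal consistency without external labels. This is not the paper's implementation: the patch geometry, network architecture, and the names `extract_25sd_05td_patches`, `PatchDenoiser`, `subset_a`, and `subset_b` are hypothetical stand-ins chosen only to convey the concept.

```python
# Illustrative sketch only: patch geometry, CNN architecture, and training loop
# are simplified assumptions, not the method described in the paper.
import torch
import torch.nn as nn


def extract_25sd_05td_patches(volume_4d, t, z, size=32, half_slices=1, half_frames=1):
    """Pull a pseudo 4-D (2.5SD+0.5TD) patch: a 2-D spatial patch plus a few
    neighbouring slices (2.5 spatial dim) and frames (0.5 temporal dim),
    stacked along the channel axis.

    volume_4d: tensor of shape (T, Z, Y, X) holding a dynamic image estimate.
    """
    frames = volume_4d[t - half_frames : t + half_frames + 1]   # temporal neighbours
    block = frames[:, z - half_slices : z + half_slices + 1]    # slice neighbours
    patch = block[..., :size, :size]                            # spatial crop
    return patch.reshape(-1, size, size)                        # (C, size, size)


class PatchDenoiser(nn.Module):
    """Small CNN operating on 2.5SD+0.5TD patches (hypothetical architecture)."""

    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def unsupervised_step(model, optimizer, subset_a, subset_b, coords):
    """One unsupervised update: map patches from one kernelized OSEM subset
    estimate toward the corresponding patches of another subset estimate, so
    the L2 loss enforces spatiotemporal consistency without any labels
    (single subject, tracer independent)."""
    patches_a = torch.stack([extract_25sd_05td_patches(subset_a, t, z) for t, z in coords])
    patches_b = torch.stack([extract_25sd_05td_patches(subset_b, t, z) for t, z in coords])
    loss = nn.functional.mse_loss(model(patches_a), patches_b)  # L2 between subsets
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the two subset estimates act as independently noisy views of the same underlying dynamic image, so minimizing the patch-wise $L_{2}$ distance between them discourages the network from reproducing subset-specific noise while keeping features common to both; the paper's final tuning step, which limits over-smoothing of the network output within the kernel matrix, is not reproduced here.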