{"title":"PET Synthesis via Self-Supervised Adaptive Residual Estimation Generative Adversarial Network","authors":"Yuxin Xue;Lei Bi;Yige Peng;Michael Fulham;David Dagan Feng;Jinman Kim","doi":"10.1109/TRPMS.2023.3339173","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3339173","url":null,"abstract":"Positron emission tomography (PET) is a widely used, highly sensitive molecular imaging modality in clinical diagnosis. There is interest in reducing the radiation exposure from PET while maintaining adequate image quality. Recent methods using convolutional neural networks (CNNs) to generate synthesized high-quality PET images from “low-dose” counterparts have been reported to be “state-of-the-art” for low-to-high image recovery. However, these methods are prone to exhibiting discrepancies in texture and structure between synthesized and real images. Furthermore, the distribution shift between low-dose PET and standard PET has not been fully investigated. To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN). We introduce 1) an adaptive residual estimation mapping mechanism, AE-Net, designed to dynamically rectify the preliminary synthesized PET images by taking the residual map between the low-dose PET and synthesized output as the input and 2) a self-supervised pretraining strategy to enhance the feature representation of the coarse generator. 
Our experiments with a public benchmark dataset of total-body PET images show that SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"426-438"},"PeriodicalIF":4.4,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Loss Functions and Supervision Methods on Total-Body PET Denoising","authors":"Si Young Yie;Keon Min Kim;Sangjin Bae;Jae Sung Lee","doi":"10.1109/TRPMS.2023.3334276","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3334276","url":null,"abstract":"The introduction of the total-body positron emission tomography (TB PET) system is a remarkable advancement in noninvasive imaging, improving annihilation photon detection sensitivity and bringing the quality of positron emission tomography (PET) images one step closer to that of anatomical images. This enables reduced scan times or radiation doses and can ultimately improve other PET images through denoising. This study investigated the effect of three loss functions: mean squared error (MSE), Poisson negative log-likelihood derived from the Poisson statistics of radiation activity, and L1 derived from the histogram of count differences between the full and partial scans. Furthermore, the effects of supervision methods are explored, comparing supervised denoising, self-supervised denoising, and interpolation of the input combined with self-supervised denoising based on the dependency relations of the partial and full scans. 
The supervised denoising method using the L1 norm loss function shows high denoising performance even under harsh denoising conditions, and the interpolated self-supervised denoising using the MSE loss preserves local features.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"379-390"},"PeriodicalIF":4.4,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
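The record above compares three training losses for TB PET denoising: MSE, Poisson negative log-likelihood, and L1. As a rough illustration only (not the authors' code; `pred` and `target` are hypothetical voxel-value lists, and the exact formulations in the paper may differ), the three objectives can be sketched in plain Python:

```python
import math

def mse_loss(pred, target):
    """Mean squared error between the denoised prediction and full-count target."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def l1_loss(pred, target):
    """Mean absolute error; less sensitive to the heavy tails of the
    count-difference histogram than MSE."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def poisson_nll_loss(pred, target, eps=1e-8):
    """Poisson negative log-likelihood, treating pred as the Poisson rate and
    target as observed counts. Up to the constant log(target!) term,
    NLL = pred - target * log(pred)."""
    return sum(p - t * math.log(p + eps) for p, t in zip(pred, target)) / len(pred)
```

The Poisson form reflects the counting statistics of radiotracer decay, which is why it is a natural candidate for raw-count PET data.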
{"title":"Unified Noise-Aware Network for Low-Count PET Denoising With Varying Count Levels","authors":"Huidong Xie;Qiong Liu;Bo Zhou;Xiongchao Chen;Xueqi Guo;Hanzhong Wang;Biao Li;Axel Rominger;Kuangyu Shi;Chi Liu","doi":"10.1109/TRPMS.2023.3334105","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3334105","url":null,"abstract":"As positron emission tomography (PET) imaging is accompanied by substantial radiation exposure and cancer risk, reducing the radiation dose in PET scans is an important topic. However, low-count PET scans often suffer from high image noise, which can negatively impact image quality and diagnostic performance. Recent advances in deep learning have shown great potential for recovering the underlying signal from noisy counterparts. However, neural networks trained on a specific noise level cannot easily generalize to other noise levels due to differing noise amplitudes and variances. To obtain optimal denoised results, we may need to train multiple networks using data with different noise levels, but this approach may be infeasible in practice due to limited data availability. Denoising dynamic PET images presents an additional challenge due to tracer decay and continuously changing noise levels across dynamic frames. To address these issues, we propose a unified noise-aware network (UNN) that combines multiple subnetworks with varying denoising power to generate optimal denoised results regardless of the input noise level. 
Evaluated on large-scale data from two medical centers with scanners from different vendors, the results show that UNN consistently produces promising denoised results regardless of input noise level and demonstrates superior performance over networks trained on single-noise-level data, especially for extremely low-count data.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"366-378"},"PeriodicalIF":4.4,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10323300","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Total-Body Ultralow-Dose PET Reconstruction Method via Image Space Shuffle U-Net and Body Sampling","authors":"Gaoyu Chen;Sheng Liu;Wenxiang Ding;Li Lv;Chen Zhao;Fenghua Weng;Yong Long;Yunlong Zan;Qiu Huang","doi":"10.1109/TRPMS.2023.3333839","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3333839","url":null,"abstract":"Low-dose positron emission tomography (PET) reconstruction algorithms aim to reduce the injected dose and/or scanning time in PET examinations while maintaining image quality, and have thus been extensively studied. In this article, we propose a novel ultralow-dose reconstruction method for total-body PET. Specifically, we developed a deep learning model named ISS-Unet based on U-Net and introduced a 3-D PixelUnshuffle/PixelShuffle pair in image space to reduce the training time and GPU memory. We then introduced two body sampling methods in the training patch preparation step to improve the training efficiency and local metrics. We also report the misalignment artifacts that are often neglected in 2-D training. The proposed method was evaluated on the MICCAI 2022 Ultralow-Dose PET Imaging Challenge dataset and won the first prize in the first-round competition according to the comprehensive score combining global and local metrics. 
We disclose the implementation details of the proposed method, followed by comparison results against three typical methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 4","pages":"357-365"},"PeriodicalIF":4.4,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10320380","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140342754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
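The ISS-Unet record above hinges on a 3-D PixelUnshuffle/PixelShuffle pair that trades spatial resolution for channels, shrinking feature maps to cut training time and GPU memory. The following NumPy sketch is our own illustration of what such a pair typically computes (not the authors' implementation; the scale factor `r` is assumed to divide each spatial dimension):

```python
import numpy as np

def pixel_unshuffle_3d(x, r):
    """(C, D, H, W) -> (C*r^3, D/r, H/r, W/r): fold r x r x r spatial blocks
    into the channel dimension so later convolutions run on smaller volumes."""
    c, d, h, w = x.shape
    x = x.reshape(c, d // r, r, h // r, r, w // r, r)
    x = x.transpose(0, 2, 4, 6, 1, 3, 5)      # channels, block offsets, coarse grid
    return x.reshape(c * r ** 3, d // r, h // r, w // r)

def pixel_shuffle_3d(x, r):
    """(C*r^3, D, H, W) -> (C/r^3, D*r, H*r, W*r): the exact inverse, unfolding
    channels back into spatial detail without losing any voxels."""
    c, d, h, w = x.shape
    c_out = c // r ** 3
    x = x.reshape(c_out, r, r, r, d, h, w)
    x = x.transpose(0, 4, 1, 5, 2, 6, 3)      # interleave offsets with the grid
    return x.reshape(c_out, d * r, h * r, w * r)
```

Because the pair is an exact bijection (unlike pooling), no image-space information is discarded, which is presumably why it suits a reconstruction task.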
{"title":"Deep-Learning-Aided Intraframe Motion Correction for Low-Count Dynamic Brain PET","authors":"Erik Reimers;Ju-Chieh Cheng;Vesna Sossi","doi":"10.1109/TRPMS.2023.3333202","DOIUrl":"https://doi.org/10.1109/TRPMS.2023.3333202","url":null,"abstract":"Data-driven intraframe motion correction of a dynamic brain PET scan (with each frame duration on the order of minutes) is often achieved through the co-registration of high-temporal-resolution (e.g., 1-s duration) subframes to estimate subject head motion. However, this conventional method of subframe co-registration may perform poorly during periods of low counts and/or drastic changes in the spatial tracer distribution over time. Here, we propose a deep learning (DL), U-Net-based convolutional neural network model which aids PET motion estimation to overcome these limitations. Unlike DL models for PET denoising, a nonstandard 2.5-D DL model is used, which transforms the high-temporal-resolution subframes into nonquantitative DL subframes that allow for improved differentiation between noise and structural/functional landmarks and estimate a constant tracer distribution across time. When estimating motion during periods of drastic change in spatial distribution (within the first minute of the scan, ~1-s temporal resolution), the proposed DL method was found to reduce the expected magnitude of error (+/−) in the estimation for an artificially injected motion trace from 16 mm and 7° (conventional method) to 0.7 mm and 0.6° (DL method). During periods of low counts but a relatively constant spatial tracer distribution (60th min of the scan, ~1-s temporal resolution), the expected error was reduced from 0.5 mm and 0.7° (conventional method) to 0.3 mm and 0.4° (DL method). 
The use of the DL method was found to significantly improve the accuracy of an image-derived input function calculation when motion was present during the first minute of the scan.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"53-63"},"PeriodicalIF":4.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139081230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A 3-D Anatomy-Guided Self-Training Segmentation Framework for Unpaired Cross-Modality Medical Image Segmentation","authors":"Yuzhou Zhuang;Hong Liu;Enmin Song;Xiangyang Xu;Yongde Liao;Guanchao Ye;Chih-Cheng Hung","doi":"10.1109/TRPMS.2023.3332619","DOIUrl":"10.1109/TRPMS.2023.3332619","url":null,"abstract":"Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, including the anatomy-guided image translation and self-training segmentation stages. In the translation stage, we first leverage the similarity distributions between patches to capture the latent anatomical relationships and propose an anatomical relation consistency (ARC) for preserving the correct anatomical relationships. Then, we design a frequency domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency domain constraint with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn the abundant label information from unlabeled target volumes. Our proposed method is validated on the cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. 
Experimental results show that our proposed method achieves state-of-the-art performance in all tasks and significantly outperforms other 2-D- or 3-D-based UDA methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"33-52"},"PeriodicalIF":4.4,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135661085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of Detector Placement on Joint Estimation in X-Ray Fluorescence Emission Tomography","authors":"Hadley DeBrosse;Ling Jian Meng;Patrick La Rivière","doi":"10.1109/TRPMS.2023.3332288","DOIUrl":"10.1109/TRPMS.2023.3332288","url":null,"abstract":"Imaging the spatial distribution of low concentrations of metal is a growing problem of interest with applications in medical and material sciences. X-ray fluorescence emission tomography (XFET) is an emerging metal mapping imaging modality with potential sensitivity improvements and practical advantages over other methods. However, XFET detector placement must first be optimized to ensure accurate metal density quantification and adequate spatial resolution. In this work, we first use singular value decomposition of the imaging model and eigendecomposition of the object-specific Fisher information matrix to study how detector arrangement affects spatial resolution and feature preservation. We then perform joint image reconstructions of a numerical gold phantom. For this phantom, we show that two parallel detectors provide metal quantification with similar accuracy to four detectors, despite the resulting anisotropic spatial resolution in the attenuation map estimate. Two orthogonal detectors provide improved spatial resolution along one axis, but underestimate the metal concentration in distant regions. Therefore, this work demonstrates the minor effect of using fewer, but strategically placed, detectors in the case where detector placement is restricted. 
This work is a critical investigation into the limitations and capabilities of XFET prior to its translation to preclinical and benchtop uses.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"21-32"},"PeriodicalIF":4.4,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135611150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2023 Index IEEE Transactions on Radiation and Plasma Medical Sciences Vol. 7","authors":"","doi":"10.1109/TRPMS.2023.3330365","DOIUrl":"10.1109/TRPMS.2023.3330365","url":null,"abstract":"","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"7 8","pages":"1-20"},"PeriodicalIF":4.4,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10312794","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135515041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DoseTransfer: A Transformer Embedded Model With Transfer Learning for Radiotherapy Dose Prediction of Cervical Cancer","authors":"Lu Wen;Jianghong Xiao;Chen Zu;Xi Wu;Jiliu Zhou;Xingchen Peng;Yan Wang","doi":"10.1109/TRPMS.2023.3330772","DOIUrl":"10.1109/TRPMS.2023.3330772","url":null,"abstract":"Cervical cancer stands as a prominent female malignancy, posing a serious threat to women’s health. The clinical solution typically involves time-consuming and laborious radiotherapy planning. Although convolutional neural network (CNN)-based models have been investigated to automate radiotherapy planning by predicting its outcomes, i.e., dose distribution maps, the insufficiency of data in the cervical cancer dataset limits the prediction performance and generalization of models. Additionally, the intrinsic locality of convolution operations also hinders models from capturing dose information at a global range, limiting the prediction accuracy. In this article, we propose a transfer learning framework embedded with a transformer, namely, DoseTransfer, to automatically predict the dose distribution for cervical cancer. To address the limited data in the cervical cancer dataset, we leverage highly correlated clinical information from rectum cancer and transfer this knowledge in a two-phase framework. Specifically, the first phase is the pretraining phase, which aims to pretrain the model with the rectum cancer dataset and extract prior knowledge from rectum cancer, while the second phase is the transferring phase, where the previously learned knowledge is effectively transferred to cervical cancer and guides the model to achieve better accuracy. Moreover, both phases are embedded with transformers to capture the global dependencies ignored by CNNs, learning wider feature representations. 
Experimental results on the in-house datasets (i.e., rectum cancer dataset and cervical cancer dataset) have demonstrated the effectiveness of the proposed method.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 1","pages":"95-104"},"PeriodicalIF":4.4,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135507701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uconnect: Synergistic Spectral CT Reconstruction With U-Nets Connecting the Energy Bins","authors":"Zhihan Wang;Alexandre Bousse;Franck Vermet;Jacques Froment;Béatrice Vedel;Alessandro Perelli;Jean-Pierre Tasu;Dimitris Visvikis","doi":"10.1109/TRPMS.2023.3330045","DOIUrl":"10.1109/TRPMS.2023.3330045","url":null,"abstract":"Spectral computed tomography (CT) offers the possibility to reconstruct attenuation images at different energy levels, which can then be used for material decomposition. However, traditional methods reconstruct each energy bin individually and are vulnerable to noise. In this article, we propose a novel synergistic method for spectral CT reconstruction, namely, Uconnect. It utilizes trained convolutional neural networks (CNNs) to connect the energy bins to a latent image so that the full binned data is used synergistically. We experiment on two types of low-dose data: 1) simulated and 2) real patient data. Qualitative and quantitative analysis shows that our proposed Uconnect outperforms state-of-the-art model-based iterative reconstruction (MBIR) techniques as well as CNN-based denoising.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 2","pages":"222-233"},"PeriodicalIF":4.4,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134982611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}