MAG-Net: A Multiscale Adaptive Generation Network for PET Synthetic CT

Huabin Wang; Zongguang Li; Xianjun Han; Gong Zhang; Qiang Zhang; Dailei Zhang; Fei Liu

IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 9, no. 1, pp. 83-94, published 2024-06-25. DOI: 10.1109/TRPMS.2024.3418831 (https://ieeexplore.ieee.org/document/10571573/)

Citations: 0
Abstract
In conventional positron emission tomography (PET)/computed tomography (CT) imaging, CT accurately displays the anatomical structure of lesions. However, CT is not available in a standalone brain PET imaging system. This article therefore proposes a novel generation network (MAG-Net) for synthesizing CT images with clear morphological details from PET. MAG-Net has three distinctive features: 1) a parallel multiscale adaptive module extracts robust PET features, improving the quality of the generated images at various resolutions; 2) a binarized contour mask module constrains the generation of the synthetic CT, guiding the model to focus on reproducing CT texture details; and 3) a pixel-level feature encoder reduces pixel-wise differences and improves the accuracy of the generated CT by mapping the positions of CT tissues and structures to their corresponding bright and dark areas. On the SCHERI dataset, the generated images reach a structural similarity (SSIM) of 0.909 and a PSNR of 26.386 dB relative to real CT images. Visualization experiments show that the generated CT has clear texture details and a realistic morphological structure, which can bring a standalone brain PET imaging system close to a PET/CT imaging system.
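The SSIM and PSNR figures quoted above can be computed for any generated/real CT pair with a few lines of NumPy. The sketch below is generic illustration, not the authors' evaluation code: `psnr` is the standard definition in dB, while `global_ssim` is a simplified single-window SSIM computed over the whole image (published SSIM numbers, including presumably those in this paper, normally average the metric over local Gaussian-weighted windows, e.g. via `skimage.metrics.structural_similarity`).

```python
import numpy as np

def psnr(real, fake, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images
    whose intensities span [0, data_range]."""
    mse = np.mean((real.astype(np.float64) - fake.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(real, fake, data_range=1.0):
    """Simplified SSIM using one global window (the standard metric
    averages over local 11x11 Gaussian windows instead)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    x = real.astype(np.float64)
    y = fake.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Example: a synthetic CT corrupted by mild noise scores high on both metrics.
rng = np.random.default_rng(0)
real_ct = rng.random((64, 64))
fake_ct = np.clip(real_ct + rng.normal(0.0, 0.01, real_ct.shape), 0.0, 1.0)
print(f"PSNR = {psnr(real_ct, fake_ct):.2f} dB, SSIM = {global_ssim(real_ct, fake_ct):.3f}")
```

Both functions assume intensities normalized to a known `data_range`; CT volumes in Hounsfield units would need `data_range` set to the window width used for normalization.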