{"title":"Medical Multimodal Image Transformation With Modality Code Awareness","authors":"Zhihua Li;Yuxi Jin;Qingneng Li;Zhenxing Huang;Zixiang Chen;Chao Zhou;Na Zhang;Xu Zhang;Wei Fan;Jianmin Yuan;Qiang He;Weiguang Zhang;Dong Liang;Zhanli Hu","doi":"10.1109/TRPMS.2024.3379580","DOIUrl":null,"url":null,"abstract":"In the planning phase of radiation therapy, positron emission tomography (PET) images are frequently integrated with computed tomography (CT) and MRI to accurately delineate the target region for treatment. However, obtaining additional CT or magnetic resonance (MR) images solely for localization purposes proves to be financially burdensome, time-intensive, and may increase patient radiation exposure. To alleviate these issues, a deep learning model with dynamic modality translation capabilities is introduced. This approach is achieved through the incorporation of adaptive modality translation layers within the decoder module. The adaptive modality translation layer effectively governs modality transformation by reshaping the data distribution of features extracted by the encoder using switch codes. The model’s performance is assessed on images with reference images using evaluation metrics, such as peak signal-to-noise ratio, structural similarity index measure, and normalized mean square error. For results without reference images, subjective assessments are provided by six nuclear medicine physicians based on clinical interpretations. The proposed model demonstrates impressive performance in transforming nonattenuation corrected PET images into user-specified modalities (attenuation corrected PET, MR, or CT), effectively streamlining the acquisition of supplemental modality images in radiation therapy scenarios.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":"8 5","pages":"511-520"},"PeriodicalIF":4.6000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10477255/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Abstract
In the planning phase of radiation therapy, positron emission tomography (PET) images are frequently integrated with computed tomography (CT) and magnetic resonance (MR) images to accurately delineate the treatment target region. However, acquiring additional CT or MR images solely for localization is financially burdensome and time-intensive, and may increase patient radiation exposure. To alleviate these issues, a deep learning model with dynamic modality translation capabilities is introduced. This capability is achieved by incorporating adaptive modality translation layers within the decoder module. The adaptive modality translation layer governs modality transformation by reshaping the distribution of the features extracted by the encoder according to switch codes. Where reference images are available, the model's performance is assessed with evaluation metrics such as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and normalized mean square error (NMSE). For results without reference images, subjective assessments are provided by six nuclear medicine physicians based on clinical interpretation. The proposed model performs well in transforming non-attenuation-corrected PET images into user-specified modalities (attenuation-corrected PET, MR, or CT), effectively streamlining the acquisition of supplemental modality images in radiation therapy scenarios.
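To make the switch-code mechanism concrete, below is a minimal sketch of how a decoder layer could reshape encoder feature statistics conditioned on a one-hot modality code, in the style of conditional instance normalization. The class name, the one-hot encoding, and the linear mapping from code to per-channel scale and shift are illustrative assumptions, not the authors' exact design.

```python
# Sketch only: a conditional-normalization layer that re-styles encoder
# features according to a modality "switch code". All names and the exact
# formulation are assumptions for illustration.
import torch
import torch.nn as nn


class AdaptiveModalityTranslation(nn.Module):
    """Re-normalizes encoder features, then applies per-modality
    scale/shift parameters predicted from a one-hot switch code."""

    def __init__(self, num_features: int, num_modalities: int = 3):
        super().__init__()
        # Normalize feature statistics without learnable affine terms;
        # the affine transform is instead predicted from the switch code.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # Map the switch code to per-channel gamma and beta.
        self.affine = nn.Linear(num_modalities, 2 * num_features)

    def forward(self, feat: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) encoder features; code: (B, num_modalities) one-hot.
        gamma, beta = self.affine(code).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.norm(feat) + beta


# Usage: pick the target modality (e.g., AC-PET / MR / CT) at inference time.
layer = AdaptiveModalityTranslation(num_features=64, num_modalities=3)
feat = torch.randn(2, 64, 32, 32)
code = torch.eye(3)[[0, 2]]     # a batch of two switch codes
out = layer(feat, code)         # (2, 64, 32, 32)
```

Conditioning only the normalization statistics, rather than the convolutional weights, is what lets a single shared decoder emit different target modalities from one trained model.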
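The reference-based metrics named in the abstract can be computed as in the following sketch, which uses the scikit-image implementations of PSNR and SSIM; the NMSE definition shown (mean squared error normalized by the reference image's energy) is a common convention and an assumption here.

```python
# Sketch of the reference-based evaluation metrics (PSNR, SSIM, NMSE).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate(pred: np.ndarray, ref: np.ndarray) -> dict:
    data_range = float(ref.max() - ref.min())
    psnr = peak_signal_noise_ratio(ref, pred, data_range=data_range)
    ssim = structural_similarity(ref, pred, data_range=data_range)
    # NMSE: mean squared error normalized by the reference's mean energy
    # (assumed convention).
    nmse = float(np.mean((pred - ref) ** 2) / np.mean(ref ** 2))
    return {"PSNR": psnr, "SSIM": ssim, "NMSE": nmse}


# Toy example: a noisy copy of a synthetic reference image.
ref = np.random.rand(128, 128).astype(np.float32)
pred = ref + 0.05 * np.random.randn(128, 128).astype(np.float32)
print(evaluate(pred, ref))
```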