Medical Multimodal Image Transformation With Modality Code Awareness

IF 4.6 Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Zhihua Li;Yuxi Jin;Qingneng Li;Zhenxing Huang;Zixiang Chen;Chao Zhou;Na Zhang;Xu Zhang;Wei Fan;Jianmin Yuan;Qiang He;Weiguang Zhang;Dong Liang;Zhanli Hu
{"title":"具有模态代码意识的医学多模态图像转换","authors":"Zhihua Li;Yuxi Jin;Qingneng Li;Zhenxing Huang;Zixiang Chen;Chao Zhou;Na Zhang;Xu Zhang;Wei Fan;Jianmin Yuan;Qiang He;Weiguang Zhang;Dong Liang;Zhanli Hu","doi":"10.1109/TRPMS.2024.3379580","DOIUrl":null,"url":null,"abstract":"In the planning phase of radiation therapy, positron emission tomography (PET) images are frequently integrated with computed tomography (CT) and MRI to accurately delineate the target region for treatment. However, obtaining additional CT or magnetic resonance (MR) images solely for localization purposes proves to be financially burdensome, time-intensive, and may increase patient radiation exposure. To alleviate these issues, a deep learning model with dynamic modality translation capabilities is introduced. This approach is achieved through the incorporation of adaptive modality translation layers within the decoder module. The adaptive modality translation layer effectively governs modality transformation by reshaping the data distribution of features extracted by the encoder using switch codes. The model’s performance is assessed on images with reference images using evaluation metrics, such as peak signal-to-noise ratio, structural similarity index measure, and normalized mean square error. For results without reference images, subjective assessments are provided by six nuclear medicine physicians based on clinical interpretations. The proposed model demonstrates impressive performance in transforming nonattenuation corrected PET images into user-specified modalities (attenuation corrected PET, MR, or CT), effectively streamlining the acquisition of supplemental modality images in radiation therapy scenarios.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Medical Multimodal Image Transformation With Modality Code Awareness\",\"authors\":\"Zhihua Li;Yuxi Jin;Qingneng Li;Zhenxing Huang;Zixiang Chen;Chao Zhou;Na Zhang;Xu Zhang;Wei Fan;Jianmin Yuan;Qiang He;Weiguang Zhang;Dong Liang;Zhanli Hu\",\"doi\":\"10.1109/TRPMS.2024.3379580\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the planning phase of radiation therapy, positron emission tomography (PET) images are frequently integrated with computed tomography (CT) and MRI to accurately delineate the target region for treatment. However, obtaining additional CT or magnetic resonance (MR) images solely for localization purposes proves to be financially burdensome, time-intensive, and may increase patient radiation exposure. To alleviate these issues, a deep learning model with dynamic modality translation capabilities is introduced. This approach is achieved through the incorporation of adaptive modality translation layers within the decoder module. The adaptive modality translation layer effectively governs modality transformation by reshaping the data distribution of features extracted by the encoder using switch codes. The model’s performance is assessed on images with reference images using evaluation metrics, such as peak signal-to-noise ratio, structural similarity index measure, and normalized mean square error. For results without reference images, subjective assessments are provided by six nuclear medicine physicians based on clinical interpretations. 
The proposed model demonstrates impressive performance in transforming nonattenuation corrected PET images into user-specified modalities (attenuation corrected PET, MR, or CT), effectively streamlining the acquisition of supplemental modality images in radiation therapy scenarios.\",\"PeriodicalId\":46807,\"journal\":{\"name\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-03-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Radiation and Plasma Medical Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10477255/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10477255/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

In the planning phase of radiation therapy, positron emission tomography (PET) images are frequently integrated with computed tomography (CT) and MRI to accurately delineate the target region for treatment. However, obtaining additional CT or magnetic resonance (MR) images solely for localization purposes proves to be financially burdensome, time-intensive, and may increase patient radiation exposure. To alleviate these issues, a deep learning model with dynamic modality translation capabilities is introduced. This approach is achieved through the incorporation of adaptive modality translation layers within the decoder module. The adaptive modality translation layer effectively governs modality transformation by reshaping the data distribution of features extracted by the encoder using switch codes. The model’s performance is assessed on images with reference images using evaluation metrics, such as peak signal-to-noise ratio, structural similarity index measure, and normalized mean square error. For results without reference images, subjective assessments are provided by six nuclear medicine physicians based on clinical interpretations. The proposed model demonstrates impressive performance in transforming nonattenuation corrected PET images into user-specified modalities (attenuation corrected PET, MR, or CT), effectively streamlining the acquisition of supplemental modality images in radiation therapy scenarios.
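The abstract does not disclose the implementation of the adaptive modality translation layer, so the following is only a minimal illustrative sketch, assuming a FiLM/AdaIN-style conditional normalization in which a one-hot "switch code" (selecting the target modality, e.g. attenuation-corrected PET, MR, or CT) predicts per-channel scale and shift parameters that reshape the distribution of decoder features. The class name, argument names, and modality ordering below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a modality-code-conditioned decoder layer (PyTorch).
# Assumption: "reshaping the data distribution of features ... using switch codes"
# is realized as conditional normalization; the paper's actual design may differ.
import torch
import torch.nn as nn


class AdaptiveModalityTranslationLayer(nn.Module):
    """Modulates decoder features according to a target-modality switch code (illustrative only)."""

    def __init__(self, num_channels: int, num_modalities: int = 3):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        # Map the one-hot switch code to per-channel modulation parameters (gamma, beta).
        self.to_scale_shift = nn.Linear(num_modalities, 2 * num_channels)

    def forward(self, features: torch.Tensor, switch_code: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W) decoder feature maps; switch_code: (B, num_modalities) one-hot target code.
        gamma, beta = self.to_scale_shift(switch_code).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        # Normalize, then rescale and shift toward the statistics implied by the target modality.
        return self.norm(features) * (1 + gamma) + beta


# Usage: request CT-like features for the first sample and AC-PET-like features for the second
# (modality indices 0=AC-PET, 1=MR, 2=CT are an arbitrary choice for this sketch).
layer = AdaptiveModalityTranslationLayer(num_channels=64, num_modalities=3)
feats = torch.randn(2, 64, 32, 32)
code = torch.nn.functional.one_hot(torch.tensor([2, 0]), num_classes=3).float()
out = layer(feats, code)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Under this assumption, a single encoder-decoder can serve all target modalities: only the switch code changes at inference time, which is consistent with the "dynamic modality translation" described in the abstract.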
Source journal: IEEE Transactions on Radiation and Plasma Medical Sciences (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 8.00
Self-citation rate: 18.20%
Articles published: 109