Cone-Beam CT to CT Image Translation Using a Transformer-Based Deep Learning Model for Prostate Cancer Adaptive Radiotherapy

Yuhei Koike, Hideki Takegawa, Yusuke Anetai, Satoaki Nakamura, Ken Yoshida, Asami Yoshida, Midori Yui, Kazuki Hirota, Kenichi Ueda, Noboru Tanigawa
Journal: Journal of Imaging Informatics in Medicine
DOI: 10.1007/s10278-024-01312-6
Published: 2024-11-07 (Journal Article)
Citations: 0

Abstract

Cone-Beam CT to CT Image Translation Using a Transformer-Based Deep Learning Model for Prostate Cancer Adaptive Radiotherapy.

Cone-beam computed tomography (CBCT) is widely utilized in image-guided radiation therapy; however, its image quality is poor compared to planning CT (pCT), thus restricting its utility for adaptive radiotherapy (ART). Our objective was to enhance CBCT image quality utilizing a transformer-based deep learning model, SwinUNETR, which we compared with a conventional convolutional neural network (CNN) model, U-net. This retrospective study involved 260 patients undergoing prostate radiotherapy, with 245 patients used for training and 15 patients reserved as an independent hold-out test dataset. Employing a CycleGAN framework, we generated synthetic CT (sCT) images from CBCT images, employing SwinUNETR and U-net as generators. We evaluated sCT image quality and assessed its dosimetric impact for photon therapy through gamma analysis and dose-volume histogram (DVH) comparisons. The mean absolute error values for the CT numbers, calculated using all voxels within the patient's body contour and taking the pCT images as a reference, were 84.07, 73.49, and 64.69 Hounsfield units for CBCT, U-net, and SwinUNETR images, respectively. Gamma analysis revealed superior agreement between the dose on the pCT images and on the SwinUNETR-based sCT plans compared to those based on U-net. DVH parameters calculated on the SwinUNETR-based sCT deviated by < 1% from those in pCT plans. Our study showed that, compared to the U-net model, SwinUNETR could proficiently generate more precise sCT images from CBCT images, facilitating more accurate dose calculations. This study demonstrates the superiority of transformer-based models over conventional CNN-based approaches for CBCT-to-CT translation, contributing to the advancement of image synthesis techniques in ART.
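The headline image-quality metric above is the mean absolute error (MAE) of CT numbers, computed over all voxels inside the patient's body contour with the pCT volume as reference. As a minimal illustration of that computation, the sketch below works on flattened toy volumes in pure Python; the function name, HU values, and mask are illustrative and not taken from the study.

```python
def mae_hu(sct, pct, body_mask):
    """Mean absolute error of CT numbers (in Hounsfield units),
    restricted to voxels inside the patient's body contour.
    All three arguments are flattened, equally sized sequences."""
    diffs = [abs(s - p)
             for s, p, inside in zip(sct, pct, body_mask) if inside]
    return sum(diffs) / len(diffs)

# Toy flattened volumes (HU) and a body mask excluding outside air.
sct  = [-1000, 40, 60, 300]   # synthetic CT values
pct  = [-1000, 50, 50, 290]   # planning CT reference values
mask = [False, True, True, True]
print(mae_hu(sct, pct, mask))  # → 10.0
```

Restricting the average to the body contour matters: the large air region outside the patient is trivially matched and would otherwise dilute the error.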
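The dosimetric comparison relies on gamma analysis, which scores each evaluated dose point by the best combined dose-difference/distance-to-agreement match against the reference distribution. The sketch below is a simplified 1-D global gamma (3%/3 mm by default) meant only to convey the idea; clinical gamma analysis runs on interpolated 2-D or 3-D dose grids, and the profiles here are invented, not data from the study.

```python
def gamma_pass_rate(ref_dose, eval_dose, spacing_mm,
                    dose_tol=0.03, dist_tol_mm=3.0):
    """1-D global gamma analysis for two dose profiles on the same grid.
    dose_tol is a fraction of the reference maximum (e.g. 3%);
    dist_tol_mm is the distance-to-agreement criterion (e.g. 3 mm).
    Returns the fraction of points with gamma <= 1."""
    d_max = max(ref_dose)
    passed = 0
    for i, de in enumerate(eval_dose):
        # gamma^2 = min over reference points of the combined
        # normalized dose difference and spatial distance.
        gamma_sq = min(
            ((de - dr) / (dose_tol * d_max)) ** 2
            + ((i - j) * spacing_mm / dist_tol_mm) ** 2
            for j, dr in enumerate(ref_dose)
        )
        passed += gamma_sq <= 1.0
    return passed / len(eval_dose)

ref = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]
ev  = [0.0, 1.0, 2.0, 2.05, 1.0, 0.0]  # small local dose error
print(gamma_pass_rate(ref, ev, spacing_mm=2.0))  # → 1.0
```

The 2.05 point passes because its 2.5% dose error stays within the 3% global tolerance; pushing it past 3% of the reference maximum would drop the pass rate below 1.0.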
