Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.

Medical Physics, Pub Date: 2025-05-19, DOI: 10.1002/mp.17885
Siqi Ye, Yizheng Chen, Siqi Wang, Lei Xing, Yu Gao
{"title":"非正交kV成像引导非共面放射治疗患者位置验证的无数据集隐式神经表征。","authors":"Siqi Ye, Yizheng Chen, Siqi Wang, Lei Xing, Yu Gao","doi":"10.1002/mp.17885","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range that the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, therefore limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager to register with the 3D planning CT for patient position verification, which is referred to as 2D-3D registration.</p><p><strong>Purpose: </strong>The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. The domain gap between the generated DRRs and the acquired projections can happen due to the inaccurate geometry modeling in DRR generation and artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans.</p><p><strong>Method: </strong>We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perception followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity difference and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method.</p><p><strong>Results: </strong>In the simulation data experiments, two X-ray projections of a head-and-neck image with <math> <semantics><msup><mn>45</mn> <mo>∘</mo></msup> <annotation>${45}^\\circ$</annotation></semantics> </math> discrepancy were used for the registration. The accuracy of the registration results was evaluated on experiments set up at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of <math> <semantics><mrow><mo><</mo> <mn>2</mn> <mspace></mspace> <mi>mm</mi></mrow> <annotation>$< 2\\nobreakspace {\\rm mm}$</annotation></semantics> </math> and subdegree accuracy for pitch and roll. 
Experiments on registration using different numbers of projections with varying angle discrepancies demonstrate the improved accuracy and robustness of the proposed method, compared to both the conventional registration approach and the proposed approach without certain components of the composite similarity measure.</p><p><strong>Conclusion: </strong>We proposed a dataset-free lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations of both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.</p>","PeriodicalId":94136,"journal":{"name":"Medical physics","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.\",\"authors\":\"Siqi Ye, Yizheng Chen, Siqi Wang, Lei Xing, Yu Gao\",\"doi\":\"10.1002/mp.17885\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range that the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, therefore limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager to register with the 3D planning CT for patient position verification, which is referred to as 2D-3D registration.</p><p><strong>Purpose: </strong>The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. The domain gap between the generated DRRs and the acquired projections can happen due to the inaccurate geometry modeling in DRR generation and artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans.</p><p><strong>Method: </strong>We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perception followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity difference and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. 
Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method.</p><p><strong>Results: </strong>In the simulation data experiments, two X-ray projections of a head-and-neck image with <math> <semantics><msup><mn>45</mn> <mo>∘</mo></msup> <annotation>${45}^\\\\circ$</annotation></semantics> </math> discrepancy were used for the registration. The accuracy of the registration results was evaluated on experiments set up at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of <math> <semantics><mrow><mo><</mo> <mn>2</mn> <mspace></mspace> <mi>mm</mi></mrow> <annotation>$< 2\\\\nobreakspace {\\\\rm mm}$</annotation></semantics> </math> and subdegree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angle discrepancies demonstrate the improved accuracy and robustness of the proposed method, compared to both the conventional registration approach and the proposed approach without certain components of the composite similarity measure.</p><p><strong>Conclusion: </strong>We proposed a dataset-free lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations of both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.</p>\",\"PeriodicalId\":94136,\"journal\":{\"name\":\"Medical physics\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-05-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical physics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/mp.17885\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/mp.17885","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range over which the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, thereby limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager and register them with the 3D planning CT for patient position verification, which is referred to as 2D-3D registration.

Purpose: The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. A domain gap between the generated DRRs and the acquired projections can arise from inaccurate geometry modeling in DRR generation and from artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans.
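For intuition only, the sketch below shows DRR generation as forward projection of the planning CT at a given gantry angle, using a parallel-beam approximation. The paper generates DRRs with the Radon transform and the actual kV imaging geometry; the function name, the volume axis convention, and the parallel-beam simplification here are illustrative assumptions rather than the authors' implementation.

```python
# Minimal DRR-generation sketch (parallel-beam approximation).
# Assumption: `volume` is an attenuation map ordered (z, y, x); the real system
# uses the onboard kV imager's cone-beam geometry, not this simplification.
import numpy as np
from scipy.ndimage import rotate

def simple_drr(volume: np.ndarray, gantry_angle_deg: float) -> np.ndarray:
    """Project a 3D volume into a 2D DRR at one gantry angle."""
    # Emulate gantry rotation by rotating the volume about the z-axis,
    # then integrate along the beam direction (x) to form line integrals.
    rotated = rotate(volume, gantry_angle_deg, axes=(1, 2),
                     reshape=False, order=1)
    return rotated.sum(axis=2)

# Example: two non-orthogonal views separated by 45 degrees.
planning_ct = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in volume
drr_a = simple_drr(planning_ct, 0.0)
drr_b = simple_drr(planning_ct, 45.0)
```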

Method: We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron (MLP) followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and the imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity differences and gradient differences between the generated DRRs and the acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method.
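The following is a minimal PyTorch sketch of the two components described above: a three-layer MLP with average pooling that regresses six rigid motion parameters (three translations, three rotations), and a composite similarity measure combining pixel-wise intensity differences with gradient differences. Layer widths, the input coordinate format, the finite-difference gradient operator, and the weighting term `lambda_grad` are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch: rigid-parameter regression head and composite similarity.
import torch
import torch.nn.functional as F

class RigidParameterNet(torch.nn.Module):
    """Lightweight three-layer MLP followed by average pooling that outputs
    six rigid motion parameters. Shapes and widths are assumptions."""
    def __init__(self, in_features: int = 3, hidden: int = 64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(in_features, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 6),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) sampled input coordinates; average pooling over N
        # yields a single vector of (tx, ty, tz, rx, ry, rz).
        return self.mlp(coords).mean(dim=0)

def image_gradients(img: torch.Tensor):
    """Finite-difference gradients of a (B, 1, H, W) image."""
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return gx, gy

def composite_similarity(drr: torch.Tensor, proj: torch.Tensor,
                         lambda_grad: float = 1.0) -> torch.Tensor:
    """Pixel-wise intensity difference plus gradient difference (lower is better)."""
    intensity_term = F.mse_loss(drr, proj)
    gx_d, gy_d = image_gradients(drr)
    gx_p, gy_p = image_gradients(proj)
    gradient_term = F.mse_loss(gx_d, gx_p) + F.mse_loss(gy_d, gy_p)
    return intensity_term + lambda_grad * gradient_term
```

In a registration loop, the six predicted parameters would rigidly transform the planning CT, DRRs would be rendered at the acquisition geometry, and this composite measure would be minimized against the acquired projections.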

Results: In the simulation data experiments, two X-ray projections of a head-and-neck image with a 45° angular discrepancy were used for the registration. The accuracy of the registration results was evaluated in experiments set up at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of < 2 mm and sub-degree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angle discrepancies demonstrate the improved accuracy and robustness of the proposed method, compared with both the conventional registration approach and the proposed approach without certain components of the composite similarity measure.

Conclusion: We proposed a dataset-free lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations of both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.
