[Accuracy and efficiency of 2D/3D single-vertebra spine navigation registration method based on dual-view feature fusion].

Q3 Medicine
M H Shao, S K Xu, Y E Guo, F Z Lyu, X S Ma, X L Xia, H L Wang, J Y Jiang
DOI: 10.3760/cma.j.cn112137-20240408-00815
Journal: Zhonghua yi xue za zhi, 2024, 104(37): 3513-3519. Published 2024-10-08.

Abstract


Objective: To investigate the accuracy and efficiency of 2D/3D registration between preoperative spine CT and intraoperative X-ray images through a framework for 2D/3D single-vertebra spine navigation registration based on the fusion of dual-view image features.

Methods: Preoperative CT and intraoperative anteroposterior (AP) and lateral (LAT) X-ray images of 140 patients with lumbar spine disorders who visited Huashan Hospital Affiliated to Fudan University from January 2020 to December 2023 were selected. To achieve rapid, high-precision single-vertebra registration in clinical orthopedic surgery, a purpose-designed transformation-parameter feature extraction module, combined with a lightweight convolutional block attention module (CBAM) providing channel and spatial attention, was used to accurately extract local single-vertebra image transformation information. A fusion regression module then combined the complementary features of the AP and LAT images to improve the accuracy of the registration parameter regression. Two 1×1 convolutions were used to reduce the amount of parameter computation, improve computational efficiency, and shorten intraoperative registration time. Finally, the regression module output the final transformation parameters. Comparative experiments were conducted using traditional iterative methods (Opt-MI, Opt-NCC, Opt-C2F) and an existing deep learning method, a convolutional neural network (CNN), as control groups. Registration accuracy (mRPD), registration time, and registration success rate were compared among the methods.

Results: Experiments on real CT data verified the image-guided registration accuracy of the proposed method. The method achieved a registration accuracy of (0.81±0.41) mm on the mRPD metric, a rotational angle error of 0.57°±0.24°, and a translation error of (0.41±0.21) mm. In experimental comparisons across mainstream backbone models, the selected DenseNet achieved significantly better registration accuracy than both ResNet and VGG (both P<0.05). Compared with the existing deep learning method [mRPD: (2.97±0.99) mm, rotational angle error: 2.64°±0.54°, translation error: (2.15±0.41) mm, registration time: (0.03±0.05) seconds], the proposed method significantly improved registration accuracy (all P<0.05). The registration success rate reached 97%, with an average single registration time of only (0.04±0.02) seconds. Compared with traditional iterative methods [mRPD: (0.78±0.26) mm, rotational angle error: 0.84°±0.57°, translation error: (1.05±0.28) mm, registration time: (35.5±10.5) seconds], the registration efficiency of the proposed method was significantly improved (all P<0.05). The dual-view design also compensated for the limitations of a single view, significantly outperforming both the AP and LAT single-view settings in positional transformation parameter error (both P<0.05).

Conclusion: Compared with existing methods, the proposed CT and X-ray registration method significantly reduces registration time while maintaining high registration accuracy, achieving efficient and precise single-vertebra registration.
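The Methods describe three ingredients: CBAM-style channel and spatial attention on per-view features, fusion of the AP and LAT feature streams, and two 1×1 convolutions regressing the transformation parameters. The paper publishes no code, so the following is only a minimal NumPy sketch of that pipeline shape under stated simplifications: random untrained weights, the spatial gate's usual 7×7 convolution replaced by a per-pixel linear map, global average pooling as the view descriptor, and six rigid-body output parameters assumed (3 rotations + 3 translations). All names and dimensions here are illustrative, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # CBAM channel gate: a shared two-layer MLP over the global average- and
    # max-pooled channel descriptors, summed, then passed through a sigmoid.
    avg = feat.mean(axis=(1, 2))                      # (C,)
    mx = feat.max(axis=(1, 2))                        # (C,)
    score = w2 @ np.maximum(w1 @ avg, 0.0) + w2 @ np.maximum(w1 @ mx, 0.0)
    return sigmoid(score)[:, None, None]              # (C, 1, 1) gate

def spatial_attention(feat, w):
    # CBAM spatial gate, simplified: channel-wise avg/max maps combined by a
    # per-pixel linear map (stand-in for the paper's convolutional gate).
    desc = np.stack([feat.mean(axis=0), feat.max(axis=0)])   # (2, H, W)
    return sigmoid(np.tensordot(w, desc, axes=([0], [0])))   # (H, W) gate

def fuse_and_regress(f_ap, f_lat, w_red, w_out):
    # Dual-view fusion: concatenate the AP and LAT descriptors, then two
    # 1x1-conv-equivalent linear layers regress 6 transformation parameters.
    z = np.concatenate([f_ap, f_lat])
    h = np.maximum(w_red @ z, 0.0)    # first 1x1 conv: channel reduction
    return w_out @ h                  # second 1x1 conv: parameter head

# Toy AP/LAT feature maps standing in for backbone (e.g. DenseNet) outputs.
C, H, W = 16, 8, 8
ap = rng.standard_normal((C, H, W))
lat = rng.standard_normal((C, H, W))

w1 = rng.standard_normal((C // 4, C))    # channel-MLP reduction
w2 = rng.standard_normal((C, C // 4))    # channel-MLP expansion
ws = rng.standard_normal(2)              # spatial-gate weights
w_red = rng.standard_normal((32, 2 * C)) # fusion reduction
w_out = rng.standard_normal((6, 32))     # 6-parameter regression head

def attend(feat):
    feat = feat * channel_attention(feat, w1, w2)
    return feat * spatial_attention(feat, ws)

params = fuse_and_regress(attend(ap).mean(axis=(1, 2)),
                          attend(lat).mean(axis=(1, 2)),
                          w_red, w_out)
print(params.shape)   # (6,)
```

The point of the two 1×1 convolutions is visible in the shapes: the concatenated dual-view descriptor (2C channels) is first reduced, then mapped to the six pose parameters, keeping the regression head's parameter count small, which is consistent with the sub-0.1-second registration times reported above.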
