Non-rigid image-volume registration for human livers in laparoscopic surgery.

IF 2.3 · CAS Tier 2 (Medicine) · Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Quantitative Imaging in Medicine and Surgery · Pub Date: 2025-09-01 · Epub Date: 2025-08-18 · DOI: 10.21037/qims-2025-387
Zhenggang Cao, Le Xie, Yuchen Yang
{"title":"腹腔镜手术中人类肝脏的非刚性图像体积配准。","authors":"Zhenggang Cao, Le Xie, Yuchen Yang","doi":"10.21037/qims-2025-387","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The fusion of intraoperative 2D laparoscopic images with preoperative 3D scans offers significant advantages in minimally invasive surgery, such as improved spatial understanding and enhanced navigation. This study aims to enable augmented reality for deformable organs through accurate 2D-3D registration. However, achieving real-time and precise alignment remains a major challenge due to organ deformation, occlusion, and the difficulty of estimating camera parameters from monocular images.</p><p><strong>Methods: </strong>We introduce a non-rigid image-volume registration (NRIVR) framework designed specifically for deformable human organs. Our approach employs a long short-term memory-based camera estimation neural network (LCENN) to predict camera poses directly from 2D anatomical contours extracted from laparoscopic images. By leveraging a differentiable mapping from 2D boundaries to camera parameters, the system enables real-time inference. Non-rigid registration is then performed in 2D space by integrating both the projected mesh and estimated deformation fields, ensuring consistent alignment across views.</p><p><strong>Results: </strong>Our experiments, evaluating the contour mapping neural network on laparoscopic images from cholecystectomy, showed that using an LCENN can efficiently predict the camera pose from 2D boundaries, achieving a minimal rotational error of 0.35±0.44° and a translational error of 0.51±0.31 mm. Consequently, our proposed framework effectively achieved 2D-3D registration on a clinical dataset, with an average target registration error of 2.74±1.51 mm.</p><p><strong>Conclusions: </strong>These results validate the feasibility and effectiveness of the proposed method for real-time 2D-3D registration in laparoscopic surgery, paving the way for enhanced image guidance in clinical workflows.</p>","PeriodicalId":54267,"journal":{"name":"Quantitative Imaging in Medicine and Surgery","volume":"15 9","pages":"8440-8456"},"PeriodicalIF":2.3000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12397628/pdf/","citationCount":"0","resultStr":"{\"title\":\"Non-rigid image-volume registration for human livers in laparoscopic surgery.\",\"authors\":\"Zhenggang Cao, Le Xie, Yuchen Yang\",\"doi\":\"10.21037/qims-2025-387\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>The fusion of intraoperative 2D laparoscopic images with preoperative 3D scans offers significant advantages in minimally invasive surgery, such as improved spatial understanding and enhanced navigation. This study aims to enable augmented reality for deformable organs through accurate 2D-3D registration. However, achieving real-time and precise alignment remains a major challenge due to organ deformation, occlusion, and the difficulty of estimating camera parameters from monocular images.</p><p><strong>Methods: </strong>We introduce a non-rigid image-volume registration (NRIVR) framework designed specifically for deformable human organs. Our approach employs a long short-term memory-based camera estimation neural network (LCENN) to predict camera poses directly from 2D anatomical contours extracted from laparoscopic images. 
By leveraging a differentiable mapping from 2D boundaries to camera parameters, the system enables real-time inference. Non-rigid registration is then performed in 2D space by integrating both the projected mesh and estimated deformation fields, ensuring consistent alignment across views.</p><p><strong>Results: </strong>Our experiments, evaluating the contour mapping neural network on laparoscopic images from cholecystectomy, showed that using an LCENN can efficiently predict the camera pose from 2D boundaries, achieving a minimal rotational error of 0.35±0.44° and a translational error of 0.51±0.31 mm. Consequently, our proposed framework effectively achieved 2D-3D registration on a clinical dataset, with an average target registration error of 2.74±1.51 mm.</p><p><strong>Conclusions: </strong>These results validate the feasibility and effectiveness of the proposed method for real-time 2D-3D registration in laparoscopic surgery, paving the way for enhanced image guidance in clinical workflows.</p>\",\"PeriodicalId\":54267,\"journal\":{\"name\":\"Quantitative Imaging in Medicine and Surgery\",\"volume\":\"15 9\",\"pages\":\"8440-8456\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12397628/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Quantitative Imaging in Medicine and Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.21037/qims-2025-387\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/8/18 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Quantitative Imaging in Medicine and Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.21037/qims-2025-387","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/8/18 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract



Background: The fusion of intraoperative 2D laparoscopic images with preoperative 3D scans offers significant advantages in minimally invasive surgery, such as improved spatial understanding and enhanced navigation. This study aims to enable augmented reality for deformable organs through accurate 2D-3D registration. However, achieving real-time and precise alignment remains a major challenge due to organ deformation, occlusion, and the difficulty of estimating camera parameters from monocular images.

Methods: We introduce a non-rigid image-volume registration (NRIVR) framework designed specifically for deformable human organs. Our approach employs a long short-term memory-based camera estimation neural network (LCENN) to predict camera poses directly from 2D anatomical contours extracted from laparoscopic images. By leveraging a differentiable mapping from 2D boundaries to camera parameters, the system enables real-time inference. Non-rigid registration is then performed in 2D space by integrating both the projected mesh and estimated deformation fields, ensuring consistent alignment across views.
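
The abstract does not include implementation details, so the following PyTorch sketch only illustrates the general idea of regressing a camera pose from an ordered 2D organ contour with a long short-term memory network. The class name `ContourPoseLSTM`, the layer sizes, the contour encoding, and the axis-angle-plus-translation pose parameterization are all assumptions made for illustration; this is not the authors' LCENN.

```python
# Illustrative sketch only: not the authors' LCENN. Architecture, sizes,
# and the 6-DoF pose parameterization are assumptions.
import torch
import torch.nn as nn


class ContourPoseLSTM(nn.Module):
    """Maps an ordered sequence of 2D contour points (x, y) to a camera pose
    encoded as 3 axis-angle rotation parameters plus 3 translation parameters."""

    def __init__(self, hidden_size: int = 128, num_layers: int = 2):
        super().__init__()
        # The LSTM consumes the organ boundary as an ordered point sequence.
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        # Regress a 6-DoF pose from the final hidden state.
        self.head = nn.Linear(hidden_size, 6)

    def forward(self, contour: torch.Tensor) -> torch.Tensor:
        # contour: (batch, num_points, 2), points ordered along the boundary.
        _, (h_n, _) = self.lstm(contour)
        return self.head(h_n[-1])  # (batch, 6) pose parameters


if __name__ == "__main__":
    model = ContourPoseLSTM()
    dummy_contour = torch.randn(4, 200, 2)  # 4 contours, 200 points each
    pose = model(dummy_contour)
    print(pose.shape)  # torch.Size([4, 6])
```

Because every operation in such a network is differentiable and pose prediction needs only a single forward pass, a 2D-boundary-to-camera-parameter mapping of this kind is compatible with the real-time inference the abstract describes.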

Results: Our experiments, evaluating the contour mapping neural network on laparoscopic images from cholecystectomy, showed that using an LCENN can efficiently predict the camera pose from 2D boundaries, achieving a minimal rotational error of 0.35±0.44° and a translational error of 0.51±0.31 mm. Consequently, our proposed framework effectively achieved 2D-3D registration on a clinical dataset, with an average target registration error of 2.74±1.51 mm.
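
The abstract does not state the exact error definitions, so the sketch below uses the formulations these metrics conventionally refer to: rotational error as the angle of the relative rotation between predicted and reference poses, translational error as a Euclidean norm, and target registration error (TRE) as the mean distance between corresponding target points after registration. Treat it as an assumption about standard usage, not a reproduction of the paper's evaluation code.

```python
# Common definitions of the reported metrics; assumed, not taken from the paper.
import numpy as np


def rotation_error_deg(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """Angle (degrees) of the relative rotation R_pred^T @ R_gt."""
    cos_angle = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))


def translation_error_mm(t_pred: np.ndarray, t_gt: np.ndarray) -> float:
    """Euclidean distance (mm) between predicted and reference translations."""
    return float(np.linalg.norm(t_pred - t_gt))


def target_registration_error_mm(p_registered: np.ndarray,
                                 p_target: np.ndarray) -> float:
    """Mean distance (mm) between registered and reference target points."""
    return float(np.mean(np.linalg.norm(p_registered - p_target, axis=1)))


if __name__ == "__main__":
    R = np.eye(3)
    print(rotation_error_deg(R, R))                        # 0.0
    print(translation_error_mm(np.zeros(3), np.ones(3)))   # ~1.732
    pts = np.random.rand(10, 3)
    print(target_registration_error_mm(pts, pts + 0.5))    # ~0.866
```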

Conclusions: These results validate the feasibility and effectiveness of the proposed method for real-time 2D-3D registration in laparoscopic surgery, paving the way for enhanced image guidance in clinical workflows.

Source journal: Quantitative Imaging in Medicine and Surgery (Medicine-Radiology, Nuclear Medicine and Imaging)
CiteScore: 4.20
Self-citation rate: 17.90%
Publication volume: 252