An augmented reality overlay for navigated prostatectomy using fiducial-free 2D-3D registration.

IF 2.3 · JCR Q3 · Engineering, Biomedical (CAS Tier 3, Medicine)
Johannes Bender, Jeremy Kwe, Benedikt Hoeh, Katharina Boehm, Ivan Platzek, Angelika Borkowetz, Stefanie Speidel, Micha Pfeiffer
DOI: 10.1007/s11548-025-03374-5
Pages: 1265-1272 · Published 2025-06-01 (Epub 2025-05-08)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167248/pdf/
Citations: 0

Abstract

Purpose: Markerless navigation in minimally invasive surgery remains an unsolved challenge. Many proposed navigation systems for minimally invasive surgery rely on stereoscopic images, whereas in clinical practice monocular endoscopes are often used. Combined with the lack of automatic video-based navigation systems for prostatectomy, this paper explores methods that tackle both research gaps simultaneously for robot-assisted prostatectomies.

Methods: To realize a semi-automatic augmented reality overlay for navigated prostatectomy, the camera pose with respect to the prostate needs to be estimated. We developed a method in which visual cues are drawn on top of the organ after an initial manual alignment, simultaneously creating matching landmarks in the 2D and 3D data. Starting from this key frame, the cues are then tracked in the endoscopic video. Both PnPRansac and differentiable rendering are explored to perform 2D-3D registration for each frame.
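The core step described above, estimating a camera pose from matching 2D-3D landmarks by minimizing reprojection error, can be sketched in a toy form. This is not the paper's implementation: all points, poses, and camera intrinsics below are synthetic, and a damped Gauss-Newton loop on a numeric Jacobian stands in for both PnPRansac-style refinement and the render-and-compare optimization behind differentiable rendering.

```python
# Toy 2D-3D registration sketch: recover a camera pose from landmark
# correspondences by minimizing reprojection error. Synthetic data only.
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(pts3d, pose, f=1000.0, c=(320.0, 240.0)):
    """Pinhole projection of Nx3 points; pose = (rvec[3], tvec[3])."""
    cam = pts3d @ rodrigues(pose[:3]).T + pose[3:]
    return f * cam[:, :2] / cam[:, 2:3] + np.array(c)

def residuals(pose, pts3d, pts2d):
    return (project(pts3d, pose) - pts2d).ravel()

def reproj_error(pose, pts3d, pts2d):
    """Mean per-landmark reprojection error in pixels."""
    return np.mean(np.linalg.norm(project(pts3d, pose) - pts2d, axis=1))

def refine_pose(pose0, pts3d, pts2d, iters=20, eps=1e-6):
    """Damped Gauss-Newton on a central-difference Jacobian."""
    pose = pose0.copy()
    for _ in range(iters):
        r = residuals(pose, pts3d, pts2d)
        J = np.zeros((r.size, 6))
        for i in range(6):
            d = np.zeros(6); d[i] = eps
            J[:, i] = (residuals(pose + d, pts3d, pts2d)
                       - residuals(pose - d, pts3d, pts2d)) / (2.0 * eps)
        step = np.linalg.solve(J.T @ J + 1e-6 * np.eye(6), -J.T @ r)
        pose += step
    return pose

rng = np.random.default_rng(0)
# Organ-scale landmark cloud roughly 30 cm in front of the camera.
pts3d = rng.uniform(-0.05, 0.05, size=(12, 3)) + np.array([0.0, 0.0, 0.3])
true_pose = np.array([0.05, -0.03, 0.02, 0.01, -0.01, 0.0])
pts2d = project(pts3d, true_pose)           # "tracked cues" in the image

init = true_pose + 0.01                     # perturbed initial manual alignment
refined = refine_pose(init, pts3d, pts2d)   # frame-wise pose estimate
```

In the paper's pipeline the 2D points would come from tracked visual cues rather than synthetic projections, and a RANSAC loop would additionally reject outlier correspondences before refinement.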

Results: We performed experiments on synthetic and in vivo data. On synthetic data, differentiable rendering achieves a median target registration error of 6.11 mm. Both PnPRansac and differentiable rendering prove feasible for 2D-3D registration.
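The reported metric, median target registration error (TRE), is the median Euclidean distance between corresponding target points after registration. A minimal sketch with made-up numbers (not the paper's data):

```python
# Median target registration error: median of per-target Euclidean
# distances between registered and ground-truth positions. Values in mm
# are illustrative only.
import numpy as np

ground_truth = np.array([[10.0, 5.0, 3.0],
                         [12.0, 7.0, 1.0],
                         [ 8.0, 4.0, 6.0]])   # target points, mm
registered   = np.array([[10.5, 5.2, 3.1],
                         [11.0, 6.5, 1.4],
                         [ 8.2, 4.1, 6.0]])   # same targets after registration

tre = np.linalg.norm(registered - ground_truth, axis=1)  # per-target error, mm
median_tre = np.median(tre)
```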

Conclusion: We demonstrated a video-based markerless augmented reality overlay for navigated prostatectomy, using visual cues as an anchor.

Journal: International Journal of Computer Assisted Radiology and Surgery (Engineering, Biomedical; Radiology, Nuclear Medicine & Medical Imaging)
CiteScore: 5.90
Self-citation rate: 6.70%
Articles per year: 243
Review time: 6-12 weeks
Aims and scope: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.