J J Peek, X Zhang, K Hildebrandt, S A Max, A H Sadeghi, A J J C Bogers, E A F Mahtab
Title: A novel 3D image registration technique for augmented reality vision in minimally invasive thoracoscopic pulmonary segmentectomy
Journal: International Journal of Computer Assisted Radiology and Surgery (IF 2.3, Q3, Engineering, Biomedical)
DOI: 10.1007/s11548-024-03308-7
Published: 2024-12-20 (Journal Article)
Citations: 0
Abstract
Purpose: In this feasibility study, we aimed to create a dedicated pulmonary augmented reality (AR) workflow to enable a semi-automated intraoperative overlay of the pulmonary anatomy during video-assisted thoracoscopic surgery (VATS) or robot-assisted thoracoscopic surgery (RATS).
Methods: Initially, the stereoscopic cameras were calibrated to obtain the intrinsic camera parameters. Intraoperatively, stereoscopic images were recorded and a 3D point cloud was generated from these images. By manually selecting the bifurcation key points, the 3D segmentation (from the diagnostic CT scan) was registered onto the intraoperative 3D point cloud.
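The registration step described above aligns CT-derived landmarks with the same landmarks picked in the intraoperative point cloud. The paper does not publish its algorithm, but a standard least-squares rigid alignment (Kabsch/Umeyama without scaling) is the usual building block for this kind of paired key-point registration. The sketch below is an illustration under that assumption; the landmark coordinates are invented toy data, not values from the study.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation R and translation t mapping paired points
    src onto dst in the least-squares sense (Kabsch algorithm)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy example: four hypothetical "bifurcation key points" from a CT
# segmentation, rotated and translated into camera space.
ct_pts = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 3.0])
cam_pts = ct_pts @ R_true.T + t_true

R, t = rigid_register(ct_pts, cam_pts)
residual = np.abs(ct_pts @ R.T + t - cam_pts).max()  # ~0 on noise-free data
```

With noisy, manually clicked landmarks the residual would be nonzero, and the estimated (R, t) minimizes the sum of squared landmark distances; this is also the closed-form step inside iterative schemes such as ICP.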
Results: Image reprojection errors were 0.34 and 0.22 pixels for the VATS and RATS cameras, respectively. We created disparity maps and point clouds for all eight patients. Time for creation of the 3D AR overlay was 5 min. Validation of the point clouds was performed, resulting in a median absolute error of 0.20 mm [IQR 0.10-0.54]. We were able to visualize the AR overlay and identify the arterial bifurcations adequately for five patients. In addition to creating AR overlays of the visible or invisible structures intraoperatively, we successfully visualized branch labels and altered the transparency of the overlays.
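The point-cloud validation is summarized as a median absolute error with an interquartile range (0.20 mm [IQR 0.10–0.54]). As a minimal sketch of how such statistics are computed, the following uses invented measurement values, not the study's data:

```python
import numpy as np

def validation_stats(measured, reference):
    """Median absolute error and interquartile range of the
    per-point errors between measured and reference values."""
    err = np.abs(np.asarray(measured, float) - np.asarray(reference, float))
    q1, med, q3 = np.percentile(err, [25, 50, 75])
    return med, (q1, q3)

# Hypothetical point-to-point distances (mm) between the reconstructed
# point cloud and a reference measurement.
measured  = np.array([10.1, 9.8, 10.5, 10.2, 9.9])
reference = np.full(5, 10.0)
med, iqr = validation_stats(measured, reference)  # med = 0.2 mm
```

Reporting median and IQR rather than mean and standard deviation is robust to the outliers that stereo reconstruction of a moving, specular surgical field tends to produce.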
Conclusion: An algorithm was developed that transforms the operative field into a 3D point cloud surface, allowing accurate registration and visualization of preoperative 3D models. Using this system, surgeons can navigate the patient's anatomy intraoperatively, especially during crucial moments, by visualizing otherwise invisible structures. The proposed registration method lays the groundwork for automated intraoperative AR navigation during minimally invasive pulmonary resections.
Journal overview:
The International Journal of Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.