Nicole Gunderson, Pengcheng Chen, Jeremy S Ruthberg, Randall A Bly, Eric J Seibel, Waleed M Abuzeid
{"title":"高保真三维重建用于内镜鼻窦手术的精确解剖测量。","authors":"Nicole Gunderson, Pengcheng Chen, Jeremy S Ruthberg, Randall A Bly, Eric J Seibel, Waleed M Abuzeid","doi":"10.1117/12.3046391","DOIUrl":null,"url":null,"abstract":"<p><p>Achieving an accurate representation of the surgical scene is essential, as it enables precise surgical navigation. Surgeons currently rely on preoperative computed tomography (CT) scans to represent the surgical scene and plan sinus procedures. However, as tissue is resected and manipulated, the anatomy represented in preoperative images becomes increasingly inaccurate and outdated. The endoscopic 3D reconstruction provides an alternative solution to this challenge, for it captures the current surgical scene. Nevertheless, achieving high reconstruction accuracy is crucial in endoscopic sinus surgery (ESS), where tissue margins lie within submillimeter distances to critical anatomy such as the orbits, cranial nerves, carotid arteries, and dura mater. To fulfill the need for a highly accurate intraoperative method of surgical scene modeling in ESS, we propose a system to generate 3D reconstructions of the sinus to garner relevant qualitative and quantitative anatomic information that substantially diverges from preoperative CT images as the surgery progresses. To achieve this, the pipeline of Neural Radiance Fields (NeRF) is expanded to include methods that simulate stereoscopic views using only a monocular endoscope to iteratively refine the depth of reconstructions. The presented workflow provides accurate depth maps, global scaling, and geometric information without camera pose-tracking tools or fiducial markers. Additional methods of point cloud denoising, outlier removal, and dropout patching have been developed and implemented to increase point cloud robustness. This expanded workflow demonstrates the ability to create high-resolution and accurate 3D reconstructions of the surgical scene. 
Using a series of three cadaveric specimens, measurements of critical anatomy were evaluated with average reconstruction errors for ethmoid length and height being 0.25mm and 0.52mm, respectively.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"13408 ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12047413/pdf/","citationCount":"0","resultStr":"{\"title\":\"High-Fidelity 3D Reconstruction for Accurate Anatomical Measurements in Endoscopic Sinus Surgery.\",\"authors\":\"Nicole Gunderson, Pengcheng Chen, Jeremy S Ruthberg, Randall A Bly, Eric J Seibel, Waleed M Abuzeid\",\"doi\":\"10.1117/12.3046391\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Achieving an accurate representation of the surgical scene is essential, as it enables precise surgical navigation. Surgeons currently rely on preoperative computed tomography (CT) scans to represent the surgical scene and plan sinus procedures. However, as tissue is resected and manipulated, the anatomy represented in preoperative images becomes increasingly inaccurate and outdated. The endoscopic 3D reconstruction provides an alternative solution to this challenge, for it captures the current surgical scene. Nevertheless, achieving high reconstruction accuracy is crucial in endoscopic sinus surgery (ESS), where tissue margins lie within submillimeter distances to critical anatomy such as the orbits, cranial nerves, carotid arteries, and dura mater. To fulfill the need for a highly accurate intraoperative method of surgical scene modeling in ESS, we propose a system to generate 3D reconstructions of the sinus to garner relevant qualitative and quantitative anatomic information that substantially diverges from preoperative CT images as the surgery progresses. 
To achieve this, the pipeline of Neural Radiance Fields (NeRF) is expanded to include methods that simulate stereoscopic views using only a monocular endoscope to iteratively refine the depth of reconstructions. The presented workflow provides accurate depth maps, global scaling, and geometric information without camera pose-tracking tools or fiducial markers. Additional methods of point cloud denoising, outlier removal, and dropout patching have been developed and implemented to increase point cloud robustness. This expanded workflow demonstrates the ability to create high-resolution and accurate 3D reconstructions of the surgical scene. Using a series of three cadaveric specimens, measurements of critical anatomy were evaluated with average reconstruction errors for ethmoid length and height being 0.25mm and 0.52mm, respectively.</p>\",\"PeriodicalId\":74505,\"journal\":{\"name\":\"Proceedings of SPIE--the International Society for Optical Engineering\",\"volume\":\"13408 \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12047413/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of SPIE--the International Society for Optical Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/12.3046391\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/4/7 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of SPIE--the International Society for Optical 
Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.3046391","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/4/7 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
High-Fidelity 3D Reconstruction for Accurate Anatomical Measurements in Endoscopic Sinus Surgery.
Achieving an accurate representation of the surgical scene is essential, as it enables precise surgical navigation. Surgeons currently rely on preoperative computed tomography (CT) scans to represent the surgical scene and plan sinus procedures. However, as tissue is resected and manipulated, the anatomy represented in preoperative images becomes increasingly inaccurate and outdated. Endoscopic 3D reconstruction offers an alternative solution to this challenge because it captures the current surgical scene. Achieving high reconstruction accuracy is nevertheless crucial in endoscopic sinus surgery (ESS), where tissue margins lie within submillimeter distances of critical anatomy such as the orbits, cranial nerves, carotid arteries, and dura mater. To meet the need for a highly accurate intraoperative method of surgical scene modeling in ESS, we propose a system that generates 3D reconstructions of the sinuses, capturing qualitative and quantitative anatomic information that diverges substantially from preoperative CT images as surgery progresses. To achieve this, a Neural Radiance Fields (NeRF) pipeline is extended with methods that simulate stereoscopic views using only a monocular endoscope, iteratively refining the depth of reconstructions. The presented workflow provides accurate depth maps, global scaling, and geometric information without camera pose-tracking tools or fiducial markers. Additional methods of point cloud denoising, outlier removal, and dropout patching have been developed and implemented to increase point cloud robustness. This expanded workflow demonstrates the ability to create high-resolution, accurate 3D reconstructions of the surgical scene. Using a series of three cadaveric specimens, measurements of critical anatomy were evaluated, with average reconstruction errors for ethmoid length and height of 0.25 mm and 0.52 mm, respectively.
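The abstract does not specify how the point cloud denoising and outlier-removal steps are implemented, so the sketch below is only a generic statistical outlier-removal routine in the spirit of that step, not the authors' method. Points whose mean distance to their k nearest neighbors exceeds the cloud-wide mean by a chosen number of standard deviations are discarded; the function name, parameters, and thresholds are illustrative assumptions.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean k-nearest-neighbor distance exceeds
    (global mean + std_ratio * global std). Illustrative only; not
    the paper's pipeline. O(n^2) distances -- use a KD-tree at scale."""
    # full pairwise distance matrix (n x n)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    # skip column 0 (distance of each point to itself)
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# a dense cluster plus one far-away stray point
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])
clean = remove_statistical_outliers(cloud)
```

In a production pipeline this screening is usually done with a spatial index (e.g. a KD-tree) rather than a dense distance matrix, and libraries such as Open3D provide an equivalent statistical outlier filter out of the box.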