Single-View Fluoroscopic X-Ray Pose Estimation: A Comparison of Alternative Loss Functions and Volumetric Scene Representations
Chaochao Zhou, Syed Hasib Akhter Faruqui, Dayeong An, Abhinav Patel, Ramez N Abdalla, Michael C Hurley, Ali Shaibani, Matthew B Potts, Babak S Jahromi, Sameer A Ansari, Donald R Cantrell
Journal of Imaging Informatics in Medicine, published 2024-12-13. DOI: 10.1007/s10278-024-01354-w (https://doi.org/10.1007/s10278-024-01354-w)
Abstract
Many tasks performed in image-guided procedures can be cast as pose estimation problems, in which specific projections are chosen to reach a target in 3D space. In this study, we construct a framework for fluoroscopic pose estimation and compare alternative loss functions and volumetric scene representations. We first develop a differentiable projection (DiffProj) algorithm for the efficient computation of Digitally Reconstructed Radiographs (DRRs) from either Cone-Beam Computed Tomography (CBCT) or neural scene representations. We introduce two innovative neural scene representations, Neural Tuned Tomography (NeTT) and masked Neural Radiance Fields (mNeRF). Pose estimation is then performed within the framework by iterative gradient descent, using loss functions that quantify the discrepancy between the synthesized DRR and the ground-truth target fluoroscopic X-ray image. We compare the alternative loss functions and volumetric scene representations for pose estimation using a dataset of 50 cranial tomographic X-ray sequences. We find that Mutual Information significantly outperforms the alternative loss functions for pose estimation, avoiding entrapment in local optima. The alternative discrete (CBCT) and neural (NeTT and mNeRF) volumetric scene representations yield comparable performance (3D angle errors: mean ≤ 3.2° and 90% quantile ≤ 3.4°); however, the neural scene representations incur considerable computational expense to train.
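To make the optimization loop described in the abstract concrete, the sketch below (not the authors' implementation) shows the basic pattern in PyTorch: render a DRR through a differentiable projector, score it against the target fluoroscopic image, and refine the pose by gradient descent. Everything here is a simplified stand-in: the projector is a toy parallel-beam line integral of a volume rotated about a single axis rather than the paper's DiffProj cone-beam geometry and 6-DoF pose, and negative normalized cross-correlation replaces the Mutual Information loss (a differentiable MI estimate would require, e.g., a soft-histogram implementation). The functions render_drr and ncc_loss and all parameter values are illustrative assumptions.

```python
# Minimal sketch of single-view pose estimation by gradient descent through a
# differentiable projector. Simplified stand-in, not the paper's DiffProj code.
import torch
import torch.nn.functional as F

def render_drr(volume, angle):
    """Toy differentiable parallel-beam DRR: rotate the volume about the z-axis
    by `angle` (radians) and integrate along one axis."""
    c, s = torch.cos(angle), torch.sin(angle)
    zero, one = torch.zeros_like(angle), torch.ones_like(angle)
    # 3x4 affine matrix (rotation about z) for F.affine_grid
    theta = torch.stack([
        torch.stack([c, -s, zero, zero]),
        torch.stack([s,  c, zero, zero]),
        torch.stack([zero, zero, one, zero]),
    ]).unsqueeze(0)                                   # (1, 3, 4)
    grid = F.affine_grid(theta, volume.shape, align_corners=False)
    rotated = F.grid_sample(volume, grid, align_corners=False)
    return rotated.sum(dim=3)                          # line integral -> 2D image

def ncc_loss(drr, target):
    """Negative normalized cross-correlation: a simple differentiable
    image-similarity surrogate for the Mutual Information loss in the paper."""
    a = drr - drr.mean()
    b = target - target.mean()
    return -(a * b).sum() / (a.norm() * b.norm() + 1e-8)

# Toy volume (stands in for a CBCT volume) and a target DRR rendered at the
# unknown ground-truth pose.
volume = torch.rand(1, 1, 32, 32, 32)
true_angle = torch.tensor(0.3)
target = render_drr(volume, true_angle).detach()

# Iteratively refine the pose estimate: the loss gradient flows through the
# differentiable renderer into the pose parameter.
angle = torch.tensor(0.0, requires_grad=True)          # initial pose guess
optimizer = torch.optim.Adam([angle], lr=0.02)
for step in range(200):
    optimizer.zero_grad()
    loss = ncc_loss(render_drr(volume, angle), target)
    loss.backward()
    optimizer.step()
print(f"estimated angle: {angle.item():.3f} rad (true: {true_angle.item():.3f})")
```

In the paper's setting the pose has six degrees of freedom, the projector models cone-beam geometry, and the scene may be a neural representation (NeTT or mNeRF) rather than a voxel grid, but the optimization structure is the same; the choice of loss matters chiefly because a poorly conditioned similarity measure can trap this descent in local optima.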