Intraoperative Registration by Cross-Modal Inverse Neural Rendering
Maximilian Fehrentz, Mohammad Farid Azampour, Reuben Dorent, Hassan Rasheed, Colin Galvin, Alexandra Golby, William M. Wells, Sarah Frisken, Nassir Navab, Nazim Haouchine
arXiv:2409.11983 (cs.CV) · 18 September 2024
Abstract
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering. Our approach separates the implicit neural representation into two components: anatomical structure, handled preoperatively, and appearance, handled intraoperatively. This disentanglement is achieved by controlling a Neural Radiance Field's appearance with a multi-style hypernetwork. Once trained, the implicit neural representation serves as a differentiable rendering engine that can be used to estimate the surgical camera pose by minimizing the dissimilarity between its rendered images and the target intraoperative image. We tested our method on retrospective patient data from clinical cases, showing that it outperforms state-of-the-art methods while meeting current clinical standards for registration. Code and additional resources can be found at https://maxfehrentz.github.io/style-ngp/.
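
As described above, registration is posed as inverse rendering: the trained representation acts as a differentiable renderer, and the camera pose is recovered by gradient descent on an image dissimilarity term. Below is a minimal PyTorch sketch of such a loop under stated assumptions; the `render_fn` callable, the se(3) twist parameterization, the first-order exponential map, and the MSE loss are illustrative placeholders, not the paper's actual renderer, pose parameterization, or cross-modal dissimilarity measure.

```python
import torch

def se3_exp(twist: torch.Tensor) -> torch.Tensor:
    """Map a 6-vector (rotation, translation) to a 4x4 rigid transform.

    Small-angle (first-order) sketch only; a full implementation would use
    the proper SE(3) exponential map.
    """
    omega, v = twist[:3], twist[3:]
    skew = torch.zeros(3, 3)
    skew[0, 1], skew[0, 2] = -omega[2], omega[1]
    skew[1, 0], skew[1, 2] = omega[2], -omega[0]
    skew[2, 0], skew[2, 1] = -omega[1], omega[0]
    T = torch.eye(4)
    T[:3, :3] = torch.eye(3) + skew   # R ~ I + [omega]_x for small rotations
    T[:3, 3] = v
    return T

def register_pose(render_fn, target_img: torch.Tensor, init_c2w: torch.Tensor,
                  steps: int = 200, lr: float = 1e-2) -> torch.Tensor:
    """Estimate the camera pose by gradient descent through a differentiable renderer.

    `render_fn(pose)` must return an image tensor shaped like `target_img` and
    be differentiable w.r.t. the pose; in the paper this role is played by the
    style-conditioned neural radiance field.
    """
    twist = torch.zeros(6, requires_grad=True)      # pose correction to optimize
    opt = torch.optim.Adam([twist], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pose = se3_exp(twist) @ init_c2w            # current camera-to-world estimate
        rendered = render_fn(pose)                  # differentiable rendering
        # Placeholder dissimilarity; the paper's loss compares across modalities.
        loss = torch.nn.functional.mse_loss(rendered, target_img)
        loss.backward()                             # gradients flow through the renderer
        opt.step()
    return se3_exp(twist.detach()) @ init_c2w
```

A call might look like `register_pose(lambda pose: model.render(pose), target_image, init_pose)`, where `model.render` stands for whatever differentiable renderer wraps the trained representation; that name is hypothetical.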