{"title":"Visual-Inertial Odometry Priors for Bundle-Adjusting Neural Radiance Fields","authors":"H. Kim, Minkyeong Song, Daekyeong Lee, Pyojin Kim","doi":"10.23919/ICCAS55662.2022.10003959","DOIUrl":null,"url":null,"abstract":"We present bundle-adjusting Neural Radiance Fields (BARF) with motion priors. Neural Radiance Field (NeRF) has opened up tremendous potential for neural volume rendering and 3D scene representations in recognition of their ability to synthesize photo-realistic novel views. BARF mitigates NeRF’s reliance on accurate 6-DoF camera poses, enabling scene learning with inaccurate camera poses. However, initializing estimates far from an optimal solution, such as BARF, can easily fall into local minima. We utilize Visual-Inertial Odometry Motion Priors to the BARF, which jointly optimizes 3D scene representations and camera poses, providing higher accuracy in view synthesis and a more stable motion estimate. The proposed method achieves results that outperform original BARF in real-world data, demonstrating the effectiveness of motion priors to knowledge use.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ICCAS55662.2022.10003959","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
We present bundle-adjusting Neural Radiance Fields (BARF) with motion priors. Neural Radiance Fields (NeRF) have opened up tremendous potential for neural volume rendering and 3D scene representation owing to their ability to synthesize photo-realistic novel views. BARF mitigates NeRF's reliance on accurate 6-DoF camera poses, enabling scene learning from inaccurate camera poses. However, when the pose estimates are initialized far from the optimal solution, BARF can easily fall into local minima. We incorporate visual-inertial odometry (VIO) motion priors into BARF, which jointly optimizes the 3D scene representation and the camera poses, yielding higher view-synthesis accuracy and more stable motion estimates. The proposed method outperforms the original BARF on real-world data, demonstrating the effectiveness of exploiting motion priors as prior knowledge.
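To make the idea of adding a VIO motion prior to BARF's joint optimization concrete, below is a minimal PyTorch sketch, not the authors' implementation: the names `pose_prior_loss`, `total_loss`, the loss weight, and the relative-transform residual are all illustrative assumptions. It shows a photometric term (as in BARF) combined with a penalty that keeps the optimized camera poses close to the VIO pose estimates.

```python
import torch

def pose_prior_loss(estimated_poses, vio_poses, weight=1e-2):
    """Penalize deviation of the optimized camera poses from the VIO motion prior.

    estimated_poses, vio_poses: (N, 4, 4) homogeneous camera-to-world matrices.
    The residual is taken on the relative transform between the VIO pose and the
    optimized pose (an illustrative design choice, not necessarily the paper's).
    """
    rel = torch.linalg.inv(vio_poses) @ estimated_poses          # (N, 4, 4)
    rot_residual = rel[:, :3, :3] - torch.eye(3, device=rel.device)
    trans_residual = rel[:, :3, 3]
    return weight * (rot_residual.pow(2).sum() + trans_residual.pow(2).sum())

def total_loss(rendered_rgb, target_rgb, estimated_poses, vio_poses):
    """BARF-style objective: photometric reconstruction term plus the VIO pose prior."""
    photometric = (rendered_rgb - target_rgb).pow(2).mean()
    return photometric + pose_prior_loss(estimated_poses, vio_poses)
```

In this sketch the VIO poses could also serve as the initialization of the pose parameters, so the joint optimization starts near a plausible solution instead of far from it, which is the failure mode the abstract attributes to plain BARF.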