Nan Luo;Zhexuan Hu;Yuan Ding;Jiaxu Li;Hui Zhao;Gang Liu;Quan Wang
Title: DFF-VIO: A General Dynamic Feature Fused Monocular Visual-Inertial Odometry
DOI: 10.1109/TCSVT.2024.3482573
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 2, pp. 1758-1773
Publication date: 2024-10-17 (Journal Article)
Impact factor: 8.3; JCR: Q1 (Engineering, Electrical & Electronic)
URL: https://ieeexplore.ieee.org/document/10720882/
Citations: 0
Abstract
Integrating dynamic effects has proven significant in enhancing the accuracy and robustness of Visual-Inertial Odometry (VIO) systems in dynamic scenarios. Existing methods either prune dynamic features or rely heavily on prior semantic knowledge or kinetic models, both of which fare poorly in scenes with many dynamic elements. This work proposes a novel dynamic feature fusion method for monocular VIO, named DFF-VIO, which requires no prior models or scene-specific assumptions. By combining IMU-predicted poses with visual cues, it first identifies dynamic features during the tracking stage using constraints on motion consistency and degree of motion. Then, we design a Dynamic Transformation Operation (DTO) to separate the effect of dynamic features across multiple frames into pairwise effects, and construct a Dynamic Feature Cell (DFC) to preserve the eligible information. Subsequently, we reformulate the VIO nonlinear optimization problem and construct dynamic feature residuals with the transformed DFC as the basic unit. Based on the proposed inter-frame model of moving features, a motion-compensation scheme is developed to resolve the reprojection problem for dynamic features, allowing their effects to be incorporated into the VIO's tightly coupled optimization and thereby achieving robust positioning in dynamic scenarios. We conduct accuracy evaluations on ADVIO and VIODE, degradation tests on the EuRoC dataset, and ablation studies that highlight the joint optimization of dynamic residuals. Results show that DFF-VIO outperforms state-of-the-art methods in pose accuracy and robustness across various dynamic environments.
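The abstract's first stage, flagging dynamic features by checking tracked observations against poses predicted from IMU integration, can be illustrated with a minimal sketch. The paper's actual criteria (its consistency and degree-of-motion constraints) are not specified here, so this is only an assumed reprojection-error test with hypothetical names (`project`, `classify_dynamic`) and a made-up pixel threshold, not the authors' method:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point X into the image using pose (R, t) and intrinsics K."""
    x_cam = R @ X + t          # transform point into the camera frame
    x_img = K @ x_cam          # apply pinhole intrinsics
    return x_img[:2] / x_img[2]  # perspective division -> pixel coordinates

def classify_dynamic(K, R_pred, t_pred, points_3d, observations, pixel_threshold=3.0):
    """Flag features whose reprojection under the IMU-predicted pose disagrees
    with the tracked observation by more than pixel_threshold (assumed criterion)."""
    flags = []
    for X, obs in zip(points_3d, observations):
        reproj = project(K, R_pred, t_pred, X)
        err = np.linalg.norm(reproj - obs)
        flags.append(bool(err > pixel_threshold))  # True -> treated as dynamic
    return flags
```

A static landmark reprojects near its tracked location under the ego-motion-only pose, while a point on a moving object shows a large discrepancy; DFF-VIO then retains such features in dedicated residual terms rather than discarding them.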
Journal description:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.