An improved semi-synthetic approach for creating visual-inertial odometry datasets
Sam Schofield, Andrew Bainbridge-Smith, Richard Green
Graphical Models, Volume 126, Article 101172, April 2023. DOI: 10.1016/j.gmod.2023.101172

Abstract: Capturing outdoor visual-inertial datasets is a challenging yet vital aspect of developing robust visual-inertial odometry (VIO) algorithms. A significant hurdle is that high-accuracy ground-truth systems (e.g., motion capture) are not practical for outdoor use. One solution is a "semi-synthetic" approach that combines rendered images with real IMU data. This approach can produce sequences containing challenging imagery and accurate ground truth, while relying on less simulated data than a fully synthetic sequence. Existing methods (used by popular tools/datasets) record IMU measurements from a visual-inertial system while measuring its trajectory using motion capture, then render images along that trajectory. This work identifies a major flaw in that approach: using motion capture alone to estimate the pose of the robot/system generates inconsistent visual-inertial data that is not suitable for evaluating VIO algorithms. However, we show that it is possible to generate high-quality semi-synthetic data for VIO algorithm evaluation. We do so using an open-source full-batch optimisation tool to incorporate both mocap and IMU measurements when estimating the IMU's trajectory. We demonstrate that this improved trajectory results in better consistency between the IMU data and the rendered images, and that the resulting data reduces VIO trajectory error by 79% compared to existing methods. Furthermore, we examine the effect of visual-inertial data inconsistency (caused by trajectory noise) on VIO performance, providing a foundation for future work targeting real-time applications.
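The abstract's core observation is that a mocap-only trajectory estimate is inconsistent with real IMU data. A minimal sketch of why (not the paper's implementation; the signal, noise level, and rates are illustrative assumptions): differentiating a mocap position trace twice to obtain the acceleration implied by the rendered camera motion amplifies even millimetre-level mocap noise, so the implied acceleration disagrees badly with what a real IMU measured along the same motion.

```python
import numpy as np

# Illustrative 1-D example: smooth true motion sampled at an assumed
# 100 Hz mocap rate, with 1 mm Gaussian measurement noise.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
true_pos = np.sin(t)        # ground-truth position
true_acc = -np.sin(t)       # analytic second derivative (what an ideal IMU sees)

rng = np.random.default_rng(0)
mocap_pos = true_pos + rng.normal(0.0, 1e-3, t.shape)

# Acceleration implied by the mocap trajectory (finite differences twice).
mocap_acc = np.gradient(np.gradient(mocap_pos, dt), dt)

pos_rmse = np.sqrt(np.mean((mocap_pos - true_pos) ** 2))
acc_rmse = np.sqrt(np.mean((mocap_acc - true_acc) ** 2))

# Position error stays at the millimetre level, but the implied
# acceleration error grows by orders of magnitude: rendered images
# (driven by mocap_pos) and real IMU data become mutually inconsistent.
print(pos_rmse, acc_rmse)
```

This noise amplification is why the paper fuses mocap with IMU measurements in a full-batch optimisation before rendering, rather than rendering along the raw mocap trajectory.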
Journal introduction:
Graphical Models is recognized internationally as a highly rated, top tier journal and is focused on the creation, geometric processing, animation, and visualization of graphical models and on their applications in engineering, science, culture, and entertainment. GMOD provides its readers with thoroughly reviewed and carefully selected papers that disseminate exciting innovations, that teach rigorous theoretical foundations, that propose robust and efficient solutions, or that describe ambitious systems or applications in a variety of topics.
We invite papers in five categories: research (contributions of novel theoretical or practical approaches or solutions), survey (opinionated views of the state-of-the-art and challenges in a specific topic), system (the architecture and implementation details of an innovative complete system that supports model/animation design, acquisition, analysis, or visualization), application (description of a novel application of known techniques and evaluation of its impact), or lecture (an elegant and inspiring perspective on previously published results that clarifies and teaches them in a new way).
GMOD offers its authors an accelerated review, feedback from experts in the field, immediate online publication of accepted papers, no restriction on color and length (when justified by the content) in the online version, and a broad promotion of published papers. A prestigious group of editors selected from among the premier international researchers in their fields oversees the review process.