{"title":"无人机导航三维大尺度场景单目视觉惯性状态估计","authors":"J. Su, Xutao Li, Yunming Ye, Yan Li","doi":"10.1109/SSRR.2017.8088162","DOIUrl":null,"url":null,"abstract":"Direct method for visual odometry has gained popularity, it needs not to compute feature descriptor and uses the actual values of camera sensors directly. Hence, it is very fast. However, its accuracy and consistency are not satisfactory. Based on these considerations, we propose a tightly-coupled, optimization-based method to fuse inertial measurement unit (IMU) and visual measurement, in which uses IMU preintegration to provide prior state for semi-direct method tracking and uses precise state estimation of visual odometry to optimizate IMU state estimation. Furthermore, we incorporate Kanade-Lucas-Tomasi tracking and a probabilistic depth filter such that the pixels in environments with little or high- frequency texture can be efficiently tracked. Our approach is able to obtain the gravity orientation in initial IMU body frame and the scale information by using the monocular camera and IMU. More importantly, we do not need any prior landmark points. Our monocular visual-inertial state estimation is much faster and achieves better accuracy on benchmark datasets.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Monocular visual-inertial state estimation on 3D large-scale scenes for UAVs navigation\",\"authors\":\"J. Su, Xutao Li, Yunming Ye, Yan Li\",\"doi\":\"10.1109/SSRR.2017.8088162\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Direct method for visual odometry has gained popularity, it needs not to compute feature descriptor and uses the actual values of camera sensors directly. Hence, it is very fast. However, its accuracy and consistency are not satisfactory. Based on these considerations, we propose a tightly-coupled, optimization-based method to fuse inertial measurement unit (IMU) and visual measurement, in which uses IMU preintegration to provide prior state for semi-direct method tracking and uses precise state estimation of visual odometry to optimizate IMU state estimation. Furthermore, we incorporate Kanade-Lucas-Tomasi tracking and a probabilistic depth filter such that the pixels in environments with little or high- frequency texture can be efficiently tracked. Our approach is able to obtain the gravity orientation in initial IMU body frame and the scale information by using the monocular camera and IMU. More importantly, we do not need any prior landmark points. 
Our monocular visual-inertial state estimation is much faster and achieves better accuracy on benchmark datasets.\",\"PeriodicalId\":403881,\"journal\":{\"name\":\"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSRR.2017.8088162\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSRR.2017.8088162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Monocular visual-inertial state estimation on 3D large-scale scenes for UAVs navigation
Direct methods for visual odometry have gained popularity: they need not compute feature descriptors and instead operate directly on the raw intensity values from the camera sensor, which makes them very fast. However, their accuracy and consistency are not satisfactory. Based on these considerations, we propose a tightly-coupled, optimization-based method to fuse inertial measurement unit (IMU) and visual measurements, which uses IMU preintegration to provide a prior state for semi-direct tracking and uses the precise state estimates from visual odometry to optimize the IMU state estimation. Furthermore, we incorporate Kanade-Lucas-Tomasi (KLT) tracking and a probabilistic depth filter so that pixels in environments with little or high-frequency texture can be tracked efficiently. Using only the monocular camera and the IMU, our approach recovers the gravity orientation in the initial IMU body frame as well as the metric scale. More importantly, we do not need any prior landmark points. Our monocular visual-inertial state estimation is much faster and achieves better accuracy on benchmark datasets.
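To make the IMU preintegration step concrete, below is a minimal sketch of how gyroscope and accelerometer samples between two camera frames can be accumulated into relative-motion deltas. The variable names, the fixed sample period, and the NumPy implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:                       # first-order approximation near zero
        return np.eye(3) + skew(w)
    k = skew(w / theta)
    return np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)

def preintegrate(acc, gyr, dt, bias_acc, bias_gyr):
    """Accumulate IMU samples between two camera frames into a relative
    rotation dR, velocity dv, and position dp, expressed in the body frame
    of the first sample.  The deltas depend only on the measurements and
    the bias estimates, not on the initial pose or velocity; gravity is
    accounted for later, when the deltas are compared against world-frame
    states in the optimization."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a, w in zip(acc, gyr):
        a = a - bias_acc                   # bias-corrected specific force
        w = w - bias_gyr                   # bias-corrected angular rate
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ exp_so3(w * dt)          # integrate rotation last
    return dR, dv, dp
```

Because the deltas depend only on the IMU samples and the current bias estimates, they need not be re-integrated when the optimizer updates the poses of the two frames, which is what makes preintegration attractive as a motion prior for tracking.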
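Likewise, here is a hedged sketch of the KLT tracking component, using OpenCV's pyramidal Lucas-Kanade tracker with a forward-backward consistency check. The parameter values and the consistency check are assumptions for illustration, and the paper's probabilistic depth filter is not shown.

```python
import cv2
import numpy as np

def track_klt(prev_gray, gray, prev_pts):
    """Track sparse points (float32 array of shape (N, 1, 2)) from
    prev_gray to gray with pyramidal Lucas-Kanade, keeping only points
    that pass a forward-backward consistency check."""
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                        30, 0.01))
    # Forward pass: prev frame -> current frame.
    nxt, st_fw, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None, **lk)
    # Backward pass: track the results back and measure the round-trip error.
    back, st_bw, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, nxt, None, **lk)
    fb_err = np.linalg.norm((prev_pts - back).reshape(-1, 2), axis=1)
    good = (st_fw.ravel() == 1) & (st_bw.ravel() == 1) & (fb_err < 1.0)
    return prev_pts[good], nxt[good]
```

The surviving correspondences would then feed per-pixel depth hypotheses that a probabilistic depth filter refines over successive frames, in the spirit of the abstract's description.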