{"title":"基于单目视觉方向约束的惯性碰撞无缝辅助","authors":"U. Qayyum, Jonghyuk Kim","doi":"10.1109/IROS.2012.6385830","DOIUrl":null,"url":null,"abstract":"Inertial-SLAM has been actively studied as it can provide all-terrain navigational capability with full six degrees-of-freedom information to autonomous robots. With the recent availability of low-cost inertial and vision sensors, a light-weight and accurate mapping system could be achieved for many robotic tasks such as land/aerial explorations. The key challenge toward this is in the availability of reliable and constant aiding information to correct the inertial system which is intrinsically unstable. The existing approaches have been relying on feature-based maps, which require accurate depth-resolution process to correct the inertial units properly where the aiding rate is highly dependent on the map density. In this work we propose to directly integrate the visual odometry to the inertial system by fusing the scale ambiguous translation vectors as Visual Directional Constraints (VDC) on vehicle motion at high update rates, while the 3D map being still used to constrain the longitudinal drifts but in a relaxed way. In this way, the visual odometry information can be seamlessly fused to inertial system by resolving the scale ambiguity problem between inertial and monocular camera thus achieving a reliable and constant aiding. The proposed approach is evaluated on SLAM benchmark dataset and simulated environment, showing a more stable and consistent performance of monocular inertial-SLAM.","PeriodicalId":6358,"journal":{"name":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"9 1","pages":"4205-4210"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Seamless aiding of inertial-slam using Visual Directional Constraints from a monocular vision\",\"authors\":\"U. Qayyum, Jonghyuk Kim\",\"doi\":\"10.1109/IROS.2012.6385830\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Inertial-SLAM has been actively studied as it can provide all-terrain navigational capability with full six degrees-of-freedom information to autonomous robots. With the recent availability of low-cost inertial and vision sensors, a light-weight and accurate mapping system could be achieved for many robotic tasks such as land/aerial explorations. The key challenge toward this is in the availability of reliable and constant aiding information to correct the inertial system which is intrinsically unstable. The existing approaches have been relying on feature-based maps, which require accurate depth-resolution process to correct the inertial units properly where the aiding rate is highly dependent on the map density. In this work we propose to directly integrate the visual odometry to the inertial system by fusing the scale ambiguous translation vectors as Visual Directional Constraints (VDC) on vehicle motion at high update rates, while the 3D map being still used to constrain the longitudinal drifts but in a relaxed way. In this way, the visual odometry information can be seamlessly fused to inertial system by resolving the scale ambiguity problem between inertial and monocular camera thus achieving a reliable and constant aiding. 
The proposed approach is evaluated on SLAM benchmark dataset and simulated environment, showing a more stable and consistent performance of monocular inertial-SLAM.\",\"PeriodicalId\":6358,\"journal\":{\"name\":\"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems\",\"volume\":\"9 1\",\"pages\":\"4205-4210\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-12-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IROS.2012.6385830\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE/RSJ International Conference on Intelligent Robots and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2012.6385830","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Inertial-SLAM has been actively studied because it can provide all-terrain navigation with full six-degrees-of-freedom information to autonomous robots. With the recent availability of low-cost inertial and vision sensors, a lightweight and accurate mapping system becomes feasible for many robotic tasks such as land and aerial exploration. The key challenge is the availability of reliable and constant aiding information to correct the inertial system, which is intrinsically unstable. Existing approaches rely on feature-based maps, which require an accurate depth-resolution process to correct the inertial unit properly, so the aiding rate is highly dependent on the map density. In this work we propose to integrate visual odometry directly into the inertial system by fusing the scale-ambiguous translation vectors as Visual Directional Constraints (VDC) on vehicle motion at high update rates, while the 3D map is still used to constrain longitudinal drift, but in a relaxed way. In this way, visual odometry information can be seamlessly fused into the inertial system by resolving the scale-ambiguity problem between the inertial system and the monocular camera, thus achieving reliable and constant aiding. The proposed approach is evaluated on a SLAM benchmark dataset and in a simulated environment, showing more stable and consistent performance of monocular inertial-SLAM.
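The abstract gives no implementation detail, but the core idea of a VDC can be illustrated with a minimal sketch: a monocular visual-odometry translation is scale-ambiguous, so only its direction is treated as a measurement, compared against the direction of the inertially predicted displacement. All names here (vdc_residual, t_vo_unit, etc.) are hypothetical, the filter structure is assumed to be an EKF, and the VO direction is assumed to be already rotated into the navigation frame; this is not the authors' code.

import numpy as np

def vdc_residual(p_prev, p_curr, t_vo_unit):
    """Direction-only residual for a Visual Directional Constraint (hypothetical sketch).

    p_prev, p_curr : inertially predicted positions at two consecutive camera frames
    t_vo_unit      : unit translation direction from monocular visual odometry,
                     assumed already expressed in the navigation frame
    Returns the residual r and the Jacobian H of the predicted direction
    with respect to the displacement, for use in an EKF-style update.
    """
    dp = p_curr - p_prev                    # inertial displacement prediction
    norm = np.linalg.norm(dp)
    if norm < 1e-6:                         # near-zero motion: direction is undefined,
        return None, None                   # so skip the update rather than divide by ~0
    h = dp / norm                           # predicted unit direction (measurement model)
    r = t_vo_unit - h                       # directional residual fed to the filter
    # Jacobian of h(dp) = dp/||dp||: projects out the radial (scale) component,
    # which is exactly the unobservable part of a monocular translation.
    H = (np.eye(3) - np.outer(h, h)) / norm
    return r, H

Because H annihilates the component of dp along h, the update corrects only the direction of the inertial displacement and leaves its magnitude untouched, which is consistent with the paper's claim that the map is still needed (in a relaxed way) to bound longitudinal drift.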