Monocular Vision Based SLAM for Mobile Robots
E. Mouragnon, M. Lhuillier, M. Dhome, F. Dekeyser, P. Sayd
18th International Conference on Pattern Recognition (ICPR'06), 2006. DOI: 10.1109/ICPR.2006.810
This paper describes a new vision-based method for the simultaneous localization and mapping (SLAM) of mobile robots. The only data used is the video input from a moving, calibrated monocular camera. From the detection and matching of interest points in images at video rate, robust estimates of the camera poses are computed in real time and a 3D map of the environment is reconstructed. The computed 3D structure is constantly refined thanks to the introduction of a fast, local bundle adjustment method that makes the approach particularly accurate and reliable. This method can thus be seen as a new visual tool that may be used in conjunction with the usual systems (GPS, inertial sensors, etc.) in SLAM applications.
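The local bundle adjustment the abstract refers to keeps refinement cheap by optimizing only the most recent camera poses and the 3D points they observe, while older poses act as fixed anchors. Below is a minimal, hypothetical sketch of such a sliding-window refinement; the parameterization (angle-axis rotations plus translations), the window size `n_opt`, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sliding-window ("local") bundle adjustment sketch.
# Assumes a calibrated camera with intrinsics K, as in the paper.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(K, rvec, tvec, X):
    """Project world points X (M,3) into pixels using an angle-axis rotation and a translation."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    Xc = X @ R.T + tvec            # points expressed in the camera frame
    uv = Xc @ K.T                  # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]


def local_ba(K, cam_params, points3d, obs, n_opt=3):
    """
    Refine only the last n_opt keyframe poses and the 3D points.

    cam_params : (C, 6) array, each row = [rvec (3), tvec (3)]
    points3d   : (M, 3) array of reconstructed 3D points
    obs        : list of (cam_idx, pt_idx, np.array([u, v])) measurements
    Older poses stay fixed, which keeps each refinement small and fast.
    """
    n_opt = min(n_opt, len(cam_params))
    n_fixed = len(cam_params) - n_opt

    def residuals(x):
        free_cams = x[: n_opt * 6].reshape(-1, 6)
        pts = x[n_opt * 6:].reshape(-1, 3)
        cams = np.vstack([cam_params[:n_fixed], free_cams])
        res = []
        for ci, pi, uv in obs:
            pred = project(K, cams[ci, :3], cams[ci, 3:], pts[pi:pi + 1])[0]
            res.append(pred - uv)  # reprojection error for this measurement
        return np.concatenate(res)

    x0 = np.concatenate([cam_params[n_fixed:].ravel(), points3d.ravel()])
    sol = least_squares(residuals, x0)  # nonlinear least-squares refinement

    refined_cams = cam_params.copy()
    refined_cams[n_fixed:] = sol.x[: n_opt * 6].reshape(-1, 6)
    refined_pts = sol.x[n_opt * 6:].reshape(-1, 3)
    return refined_cams, refined_pts
```

Restricting the optimization to a few recent poses is what distinguishes this local scheme from a full bundle adjustment over the whole trajectory, and is the reason the refinement can keep up with video rate.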