A. Milella, R. Siegwart
Fourth IEEE International Conference on Computer Vision Systems (ICVS'06), 2006
DOI: 10.1109/ICVS.2006.56
Stereo-Based Ego-Motion Estimation Using Pixel Tracking and Iterative Closest Point
In this paper, we present a stereovision algorithm for real-time 6DoF ego-motion estimation, which integrates image intensity information and 3D stereo data in the well-known Iterative Closest Point (ICP) scheme. The proposed method addresses a basic problem of standard ICP, namely its inability to segment the data points and to cope with large displacements. Neither a priori knowledge of the motion nor input from other sensors is required; the only assumption is that the scene always contains visually distinctive features that can be tracked over subsequent stereo pairs. This yields what is usually called visual odometry. The paper details the various steps of the algorithm and presents the results of experimental tests performed with an all-terrain mobile robot, showing the method to be both accurate and effective for autonomous navigation purposes.
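To illustrate the core idea, once pixel tracking has established point-to-point correspondences between the 3D stereo data of two frames, each ICP-style iteration reduces to a closed-form least-squares rigid alignment. The sketch below is not the authors' implementation but a generic version of that alignment step, using the standard SVD-based method (Arun et al.); the function name and the synthetic demo data are illustrative.

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rigid transform (R, t) minimizing ||(R @ p + t) - q||
    over corresponding point sets P, Q of shape (N, 3).
    This is the closed-form alignment step at the heart of ICP; because
    tracked pixels already provide the correspondences, no nearest-neighbor
    search is needed here."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cp).T @ (Q - cq)                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Demo with a known synthetic motion (noise-free, so recovery is exact).
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3                                          # yaw of 0.3 rad
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true

R_est, t_est = rigid_transform_svd(P, Q)
```

With noisy real stereo data, this step would be wrapped in an iteration that rejects outlier correspondences before re-solving; the point of the tracking stage in the paper is precisely that it supplies good correspondences even under large displacements, where vanilla ICP's closest-point matching fails.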