{"title":"基于鲁棒不变特征的室内拓扑导航","authors":"Zhe L. Lin, Sungho Kim, In-So Kweon","doi":"10.1109/IROS.2005.1545589","DOIUrl":null,"url":null,"abstract":"In this paper, we present a recognition-based autonomous navigation system for mobile robots. The system is based on our previously proposed robust invariant feature (RIF) detector. This detector extracts highly robust and repeatable features based on the key idea of tracking multi-scale interest points and selecting unique representative local structures with the strongest response in both spatial and scale domains. Weighted Zernike moments are used as the feature descriptor and applied to the place recognition. The navigation system is composed of on-line and off-line two stages. In the off-line learning stage, we train the robot in its workspace by just taking several images of representative places as landmarks. Then, in the on-line navigation stage, the robot recognizes scenes, obtains robust feature correspondences, and navigates the environment autonomously using the iterative pose converging (IPC) algorithm which is based on the idea of the visual servoing technique. The experimental results and the performance evaluation show that the proposed navigation system can achieve excellent performance in complex indoor environments.","PeriodicalId":189219,"journal":{"name":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"Recognition-based indoor topological navigation using robust invariant features\",\"authors\":\"Zhe L. Lin, Sungho Kim, In-So Kweon\",\"doi\":\"10.1109/IROS.2005.1545589\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we present a recognition-based autonomous navigation system for mobile robots. The system is based on our previously proposed robust invariant feature (RIF) detector. This detector extracts highly robust and repeatable features based on the key idea of tracking multi-scale interest points and selecting unique representative local structures with the strongest response in both spatial and scale domains. Weighted Zernike moments are used as the feature descriptor and applied to the place recognition. The navigation system is composed of on-line and off-line two stages. In the off-line learning stage, we train the robot in its workspace by just taking several images of representative places as landmarks. Then, in the on-line navigation stage, the robot recognizes scenes, obtains robust feature correspondences, and navigates the environment autonomously using the iterative pose converging (IPC) algorithm which is based on the idea of the visual servoing technique. 
The experimental results and the performance evaluation show that the proposed navigation system can achieve excellent performance in complex indoor environments.\",\"PeriodicalId\":189219,\"journal\":{\"name\":\"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IROS.2005.1545589\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2005 IEEE/RSJ International Conference on Intelligent Robots and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2005.1545589","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Recognition-based indoor topological navigation using robust invariant features
In this paper, we present a recognition-based autonomous navigation system for mobile robots. The system is built on our previously proposed robust invariant feature (RIF) detector, which extracts highly robust and repeatable features by tracking multi-scale interest points and selecting unique, representative local structures with the strongest response in both the spatial and scale domains. Weighted Zernike moments serve as the feature descriptor and are applied to place recognition. The navigation system consists of two stages: off-line learning and on-line navigation. In the off-line learning stage, we train the robot in its workspace simply by taking several images of representative places as landmarks. In the on-line navigation stage, the robot recognizes scenes, obtains robust feature correspondences, and navigates the environment autonomously using the iterative pose converging (IPC) algorithm, which is based on the idea of visual servoing. The experimental results and the performance evaluation show that the proposed navigation system achieves excellent performance in complex indoor environments.
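To make the two-stage pipeline concrete, the following is a minimal sketch of off-line landmark learning and on-line place recognition by feature matching. It is not the authors' implementation: OpenCV's ORB detector and descriptors stand in for the RIF detector and weighted Zernike moments (which are not publicly available), and the landmark names, image paths, and ratio-test threshold are illustrative assumptions.

```python
# Sketch of the two-stage recognition pipeline described in the abstract.
# ORB replaces the paper's RIF + weighted Zernike moments; file names,
# landmark labels, and the 0.75 ratio threshold are hypothetical.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def describe(image_path):
    """Detect keypoints and compute descriptors for one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return keypoints, descriptors

# Off-line learning stage: store descriptors of a few representative places.
landmarks = {}
for name, path in [("corridor", "corridor.png"), ("lab_door", "lab_door.png")]:
    landmarks[name] = describe(path)[1]

# On-line navigation stage: recognize the current scene by matching its
# descriptors against every stored landmark and keeping the best one.
def recognize(query_path, ratio=0.75):
    """Return the landmark with the most ratio-test matches to the query image."""
    _, query_desc = describe(query_path)
    best_name, best_score = None, 0
    for name, lm_desc in landmarks.items():
        pairs = matcher.knnMatch(query_desc, lm_desc, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_score:
            best_name, best_score = name, len(good)
    return best_name, best_score

place, n_matches = recognize("current_view.png")
print(f"Recognized place: {place} ({n_matches} feature correspondences)")
```

In the full system described above, the surviving feature correspondences would then feed the IPC algorithm, which iteratively drives the robot's pose toward the recognized landmark in a visual-servoing fashion; that control step is not sketched here.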