{"title":"Recovering the position and orientation of a mobile robot from a single image of identified landmarks","authors":"Wenfei Liu, Yu Zhou","doi":"10.1109/IROS.2007.4399059","DOIUrl":null,"url":null,"abstract":"This paper introduces a novel self-localization algorithm for mobile robots, which recovers the robot position and orientation from a single image of identified landmarks taken by an onboard camera. The visual angle between two landmarks can be derived from their projections in the same image. The distances between the optical center and the landmarks can be calculated from the visual angles and the known landmark positions based on the law of cosine. The robot position can be determined using the principle of trilateration. The robot orientation is then computed from the robot position, landmark positions and their projections. Extensive simulation has been carried out. A comprehensive error analysis provides the insight on how to improve the localization accuracy.","PeriodicalId":227148,"journal":{"name":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","volume":"452 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 IEEE/RSJ International Conference on Intelligent Robots and Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IROS.2007.4399059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper introduces a novel self-localization algorithm for mobile robots, which recovers the robot's position and orientation from a single image of identified landmarks taken by an onboard camera. The visual angle between two landmarks can be derived from their projections in the same image. The distances between the optical center and the landmarks can then be calculated from the visual angles and the known landmark positions using the law of cosines. The robot position is determined by trilateration, and the robot orientation is then computed from the robot position, the landmark positions, and their projections. Extensive simulations have been carried out, and a comprehensive error analysis provides insight into how to improve the localization accuracy.
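
To make the pipeline concrete, the following is a minimal 2D sketch in Python, not the authors' implementation: it assumes a 1D pinhole camera with known focal length and three identified landmarks, recovers the pairwise visual angles from the image projections, solves the law-of-cosines equations for the camera-to-landmark ranges, trilaterates the position, and finally recovers the heading from the position, landmark positions, and projections. The landmark coordinates, focal length, and solver choices are illustrative assumptions.

```python
# Minimal 2D sketch of the described pipeline. Landmark coordinates, focal
# length, and the use of scipy's least_squares are illustrative assumptions,
# not taken from the paper.
import numpy as np
from scipy.optimize import least_squares

# Known landmark positions in the world frame (assumed values).
L = np.array([[4.0, 3.0], [6.0, -2.0], [9.0, 1.0]])
f = 500.0  # focal length in pixels (assumed)

# Simulate a ground-truth pose and the resulting 1D image projections
# (u = f * tan(bearing in the camera frame); sign conventions simplified).
true_pos, true_theta = np.array([1.0, 0.5]), 0.2
cam_bearings = np.arctan2(L[:, 1] - true_pos[1], L[:, 0] - true_pos[0]) - true_theta
u = f * np.tan(cam_bearings)

# Step 1: visual angles between landmark pairs from their projections.
gamma = np.arctan(u / f)          # per-landmark ray angle in the camera frame
pairs = [(0, 1), (1, 2), (0, 2)]
alpha = np.array([abs(gamma[i] - gamma[j]) for i, j in pairs])

# Step 2: camera-to-landmark ranges r_i from the law of cosines:
#   d_ij^2 = r_i^2 + r_j^2 - 2 r_i r_j cos(alpha_ij),  with d_ij known.
d = np.array([np.linalg.norm(L[i] - L[j]) for i, j in pairs])

def residuals(r):
    return [r[i] ** 2 + r[j] ** 2 - 2 * r[i] * r[j] * np.cos(a) - dij ** 2
            for (i, j), a, dij in zip(pairs, alpha, d)]

# Reasonable positive initial guess; multiple roots and degenerate landmark
# geometry are not handled in this sketch.
r = least_squares(residuals, x0=np.full(3, d.mean()), bounds=(0, np.inf)).x

# Step 3: position by trilateration (subtracting the circle equations
# pairwise gives a linear system in the unknown position).
A = 2.0 * (L[1:] - L[0])
b = r[0] ** 2 - r[1:] ** 2 + np.sum(L[1:] ** 2, axis=1) - np.sum(L[0] ** 2)
pos = np.linalg.lstsq(A, b, rcond=None)[0]

# Step 4: orientation from the position, landmark positions and projections:
# world-frame bearing minus camera-frame bearing gives the heading estimate.
phi = np.arctan2(L[:, 1] - pos[1], L[:, 0] - pos[0])
theta = np.arctan2(np.mean(np.sin(phi - gamma)), np.mean(np.cos(phi - gamma)))

print("estimated position:", pos, "  true:", true_pos)
print("estimated heading :", theta, "  true:", true_theta)
```

In this noiseless simulation the estimates reproduce the ground-truth pose up to numerical error; the sensitivity to pixel noise and landmark geometry is the subject of the paper's error analysis.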