{"title":"Analysis of determining camera position via Karhunen-Loeve transform","authors":"P. Quick, D. Capson","doi":"10.1109/IAI.2000.839577","DOIUrl":null,"url":null,"abstract":"The Karhunen-Loeve transform (KLT) can be used to compress sets of correlated visual data. Human faces and object recognition are popular areas of current research that use KLT-based methods. The KLT can also be used to compress visual data corresponding to a camera moved translationally and/or rotationally relative to a scene. Positioning of a camera relative to a scene can then be derived accurately using KLT feature vectors; this finds application in robotics and autonomous navigation. Various factors affect the accuracy and speed of such position determination including the number of KLT vectors used, the number of images used to perform the KLT, the number of images used in the comparison set and the size of the movement range. This paper investigates the performance of the KLT with a series of experiments determining a camera's rotational position relative to a generic laboratory scene.","PeriodicalId":224112,"journal":{"name":"4th IEEE Southwest Symposium on Image Analysis and Interpretation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2000-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"4th IEEE Southwest Symposium on Image Analysis and Interpretation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IAI.2000.839577","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
The Karhunen-Loeve transform (KLT) can be used to compress sets of correlated visual data. Face recognition and object recognition are popular areas of current research that use KLT-based methods. The KLT can also be used to compress visual data corresponding to a camera moved translationally and/or rotationally relative to a scene. The position of a camera relative to a scene can then be derived accurately using KLT feature vectors; this finds application in robotics and autonomous navigation. Various factors affect the accuracy and speed of such position determination, including the number of KLT vectors used, the number of images used to perform the KLT, the number of images in the comparison set, and the size of the movement range. This paper investigates the performance of the KLT with a series of experiments determining a camera's rotational position relative to a generic laboratory scene.
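To make the approach described in the abstract concrete, the following is a minimal sketch (not from the paper) of appearance-based rotational pose estimation with a KLT/PCA eigenspace: training images captured at known rotation angles define the basis, and a query image is matched to the nearest training projection. The function names, the use of SVD to obtain the basis, and the nearest-neighbor matching step are assumptions for illustration only.

```python
import numpy as np

def build_klt_basis(train_images, n_vectors):
    """train_images: (N, H*W) array of flattened, correlated images."""
    mean = train_images.mean(axis=0)
    centered = train_images - mean
    # SVD of the centered data yields the KLT basis (principal components).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_vectors]            # leading KLT vectors, shape (n_vectors, H*W)
    coords = centered @ basis.T       # training feature vectors in eigenspace
    return mean, basis, coords

def estimate_rotation(query_image, mean, basis, coords, train_angles):
    """Project the query image onto the basis and return the angle of the closest training sample."""
    feature = (query_image - mean) @ basis.T
    distances = np.linalg.norm(coords - feature, axis=1)
    return train_angles[np.argmin(distances)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    angles = np.linspace(0.0, 90.0, 46)          # known rotational positions (degrees)
    images = rng.random((len(angles), 64 * 64))  # stand-in for real captured frames
    mean, basis, coords = build_klt_basis(images, n_vectors=8)
    query = images[10] + 0.01 * rng.standard_normal(64 * 64)
    print(estimate_rotation(query, mean, basis, coords, angles))
```

In this sketch, the factors the paper studies map directly onto the parameters: `n_vectors` is the number of KLT vectors, the number of rows in `train_images` is the number of images used to perform the KLT, `coords` plays the role of the comparison set, and the span of `angles` corresponds to the movement range.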