Orientation estimation using visual and inertial sensors
Yinlong Zhang, Wei Liang, Yang Li, Haibo An, Jindong Tan
2015 IEEE International Conference on Information and Automation, October 2015. DOI: 10.1109/ICINFA.2015.7279593
Citations: 3
Abstract
This paper presents an orientation estimation scheme that uses a monocular camera and inertial measurement units (IMUs). Unlike traditional wearable orientation estimation methods, the proposed approach fuses the two modalities in a novel way. First, two visual correspondences between consecutive frames are selected that satisfy not only a descriptor-similarity constraint but also a locality constraint: a correspondence is accepted as an inlier only if its nearest-neighbor feature points map to points that lie within predefined thresholds of its counterpart feature point in the other frame. Second, these two selected correspondences from the visual sensor and quaternions from the inertial sensor are jointly used to derive the initial body poses. Third, a coarse-to-fine procedure iteratively removes visual false matches and refines the body poses using Expectation Maximization (EM), ultimately yielding the optimal orientation estimate. Experimental results validate that the proposed method is effective and well suited to wearable orientation estimation.
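The locality-constrained matching in the first step can be illustrated with a short sketch. The Python/NumPy snippet below is a minimal, hypothetical rendering, not the authors' implementation: the function name `match_with_locality` and the parameters `sim_thresh`, `k`, and `loc_thresh` are illustrative assumptions, and the descriptor-similarity constraint is realized here as a standard Lowe-style ratio test.

```python
import numpy as np

def match_with_locality(kp1, desc1, kp2, desc2,
                        sim_thresh=0.7, k=5, loc_thresh=30.0):
    """Select correspondences between two frames that satisfy both a
    descriptor-similarity test and a locality (neighborhood) check.

    kp1, kp2     : (N, 2) / (M, 2) arrays of keypoint pixel coordinates
    desc1, desc2 : (N, D) / (M, D) arrays of feature descriptors
    Returns a list of (i, j) index pairs into kp1 / kp2.
    """
    # Descriptor-similarity constraint: ratio test on the two closest
    # descriptors in frame 2 for every descriptor in frame 1.
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    best = np.argmin(d, axis=1)
    sorted_d = np.sort(d, axis=1)
    ratio_ok = sorted_d[:, 0] < sim_thresh * sorted_d[:, 1]
    cand = [(i, best[i]) for i in np.where(ratio_ok)[0]]

    idx1 = np.array([i for i, _ in cand])
    idx2 = np.array([j for _, j in cand])

    # Locality constraint: a candidate match (i, j) is kept as an inlier
    # only if the k nearest matched neighbors of keypoint i in frame 1
    # map to points that stay within loc_thresh pixels of keypoint j.
    matches = []
    for i, j in cand:
        dist1 = np.linalg.norm(kp1[idx1] - kp1[i], axis=1)
        nbr = np.argsort(dist1)[1:k + 1]  # skip the point itself
        dist2 = np.linalg.norm(kp2[idx2[nbr]] - kp2[j], axis=1)
        if np.all(dist2 < loc_thresh):    # vacuously true if < k neighbors
            matches.append((i, j))
    return matches
```

In the paper's pipeline, two of the surviving matches would then be combined with the IMU quaternions to derive the initial body pose, after which the EM stage coarse-to-fine re-weights the remaining matches and refines the pose iteratively.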