{"title":"Integration of 3D and 2D imaging data for assured navigation in unknown environments","authors":"Evan Dill, M. U. de Haag","doi":"10.1109/PLANS.2010.5507244","DOIUrl":null,"url":null,"abstract":"This paper discusses the development of a novel navigation method that integrates three-dimensional (3D) point cloud data, two-dimensional (2D) digital camera data, and data from an Inertial Measurement Unit (IMU). The target application is to provide an accurate position and attitude determination of unmanned aerial vehicles (UAV) or autonomous ground vehicles (AGV) in any urban or indoor environments, during any scenario. In some urban and indoor environments, GPS signals are attainable and usable for these target applications, but this is not always the case. GPS position capability may not only be unavailable due to shadowing, significant signal attenuation or multipath, but also due to intentional denial or deception. In these scenarios where GPS is not a viable, or reliable option, a system must be developed that compliments GPS and works in the environments where GPS encounters problems. The proposed algorithm is an effort to show one possible method that a complementary system to GPS could use. It extracts key features such as planar surfaces, lines, corners, and points from both the 3D (point-cloud) and 2D (intensity) imagery. Consecutive observations of corresponding features in the 3D and 2D image frames are then used to compute estimates of position and orientation changes. Since the use of 3D image features for positioning suffers from limited feature observability resulting in deteriorated position accuracies, and the 2D imagery suffers from an unknown depth when estimating the pose from consecutive image frames, it is expected that the integration of both data sets will alleviate the problems with the individual methods resulting in a position and attitude determination procedure with a high level of assurance. An Inertial Measurement Unit (IMU) is used to set up the tracking gates necessary to perform data association of the features in consecutive frames. Finally, the position and orientation change estimates can be used to correct for and mitigate the IMU drift errors.","PeriodicalId":94036,"journal":{"name":"IEEE/ION Position Location and Navigation Symposium : [proceedings]. IEEE/ION Position Location and Navigation Symposium","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2010-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE/ION Position Location and Navigation Symposium : [proceedings]. IEEE/ION Position Location and Navigation Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PLANS.2010.5507244","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
This paper discusses the development of a novel navigation method that integrates three-dimensional (3D) point cloud data, two-dimensional (2D) digital camera data, and data from an Inertial Measurement Unit (IMU). The target application is accurate position and attitude determination of unmanned aerial vehicles (UAVs) or autonomous ground vehicles (AGVs) in urban or indoor environments, under any scenario. In some urban and indoor environments, GPS signals are available and usable for these target applications, but this is not always the case. GPS positioning may be unavailable not only because of shadowing, significant signal attenuation, or multipath, but also because of intentional denial or deception. In scenarios where GPS is not a viable or reliable option, a system is needed that complements GPS and works in the environments where GPS encounters problems. The proposed algorithm demonstrates one possible method that such a complementary system could use. It extracts key features such as planar surfaces, lines, corners, and points from both the 3D (point-cloud) and 2D (intensity) imagery. Consecutive observations of corresponding features in the 3D and 2D image frames are then used to compute estimates of position and orientation changes. Positioning from 3D image features alone suffers from limited feature observability, which degrades position accuracy, while pose estimation from consecutive 2D image frames suffers from unknown depth; integrating both data sets is therefore expected to alleviate the weaknesses of the individual methods and yield a position and attitude determination procedure with a high level of assurance. The IMU is used to set up the tracking gates necessary to perform data association of the features in consecutive frames. Finally, the position and orientation change estimates can be used to correct for and mitigate IMU drift errors.
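The abstract does not specify how position and orientation changes are computed from corresponding features. The sketch below shows one standard approach consistent with that step: least-squares rigid alignment of matched 3D point features across two consecutive frames via SVD (the Kabsch/Horn method). It assumes correspondences are already established; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def estimate_pose_change(p_prev: np.ndarray, p_curr: np.ndarray):
    """Least-squares rigid transform (R, t) such that p_curr ~ R @ p_prev + t.

    p_prev, p_curr: (N, 3) arrays of corresponding 3D feature positions
    observed in two consecutive frames; requires N >= 3 non-collinear points.
    """
    mu_p, mu_c = p_prev.mean(axis=0), p_curr.mean(axis=0)
    # Cross-covariance of the centered correspondence sets.
    H = (p_prev - mu_p).T @ (p_curr - mu_c)
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper (reflected) solution if its determinant is negative.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_p
    return R, t
```

With noise-free correspondences this recovers the frame-to-frame motion exactly; with noisy measurements it gives the least-squares estimate, which is the kind of pose-change observation that can then be fed back to bound IMU drift.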
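The IMU-derived tracking gates can likewise be sketched in miniature: the pose change predicted by integrating the IMU between frames tells us where each feature from the previous frame should reappear, and only candidates inside a gate around that prediction are considered for association. The simple Euclidean gate, the `gate_radius` value, and all names here are assumptions for illustration; the paper's actual gating criterion may differ.

```python
import numpy as np

def gate_and_associate(feats_prev, feats_curr, R_imu, t_imu, gate_radius=0.5):
    """Nearest-neighbour feature association restricted to IMU-predicted gates.

    feats_prev: (N, 3) feature positions from the previous frame.
    feats_curr: (M, 3) candidate feature positions in the current frame.
    R_imu, t_imu: frame-to-frame pose change predicted from the IMU.
    gate_radius: acceptance threshold in metres (illustrative tuning value).
    Returns a list of (i, j) index pairs of gated matches.
    """
    # Predict where each previous-frame feature should appear now.
    predicted = feats_prev @ R_imu.T + t_imu
    matches = []
    for i, p in enumerate(predicted):
        dists = np.linalg.norm(feats_curr - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= gate_radius:  # reject candidates outside the gate
            matches.append((i, j))
    return matches
```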