A spaceborne camera pose estimate method based on high-precision point cloud model

Bian Xiao, Jun Ma, Feng Li, Lei Xin, Bangcheng Zhan
2020 15th IEEE International Conference on Signal Processing (ICSP), 2020-12-06
DOI: 10.1109/ICSP48669.2020.9320984
Obtaining the position and orientation of a camera or sensor is a key task in many fields, such as robot navigation, autonomous driving, and DSM (digital surface model) reconstruction. The pose can be recovered by matching a 2D image against a corresponding digital surface model or point cloud model of the scene. A 3D point cloud model of very high spatial accuracy can be created by combining stereophotogrammetry with big-data processing; so far, the most accurate 3D point cloud models created from satellite imagery reach an accuracy of 3 m at SE90 (Spherical Error, 90% confidence). This paper presents a novel method for estimating the pose of spaceborne cameras based on the fusion of high-resolution point cloud models and remote sensing images. The core of the method is to project a high-precision 3D point cloud model into the image space of a virtual camera, transforming the 3D-2D pose estimation problem into a 2D-2D image registration problem. The registration result between the two images is then used to estimate the camera pose parameters. Simulation experiments were carried out to evaluate the performance of the method, and the results show that acceptable camera pose accuracy can be achieved with the proposed approach.
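The pipeline the abstract describes (project the point cloud into a virtual camera's image plane, register the rendered view against the real image, then solve for the pose from the resulting correspondences) can be sketched compactly. The Python snippet below is a minimal illustration using OpenCV on synthetic data, not the paper's implementation: the 2D-2D registration step is simulated by construction, and the point cloud, intrinsics, poses, and noise level are all illustrative assumptions.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Synthetic stand-in for the high-precision point cloud: 500 points in a
# volume in front of the cameras (illustrative values, not from the paper).
points_3d = rng.uniform([-50.0, -50.0, 200.0],
                        [50.0, 50.0, 400.0], size=(500, 3)).astype(np.float32)

# Assumed shared pinhole intrinsics for the virtual and the real camera.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Step 1: project the point cloud into the image plane of a virtual camera
# placed at a coarse initial pose guess (identity rotation, zero translation).
virtual_px, _ = cv2.projectPoints(points_3d, np.zeros(3), np.zeros(3), K, dist)

# Ground-truth pose of the real spaceborne camera (what we want to recover).
rvec_true = np.array([0.02, -0.03, 0.01])
tvec_true = np.array([1.5, -2.0, 5.0])
real_px, _ = cv2.projectPoints(points_3d, rvec_true, tvec_true, K, dist)
real_px += rng.normal(0.0, 0.5, real_px.shape)  # pixel noise from registration

# Step 2 (simulated): 2D-2D registration would match each virtual-image pixel
# in virtual_px to a real-image pixel; because every virtual pixel comes from
# a known 3D point, the matches yield 3D-2D correspondences for the real view.

# Step 3: solve PnP on those correspondences to estimate the real camera pose.
ok, rvec_est, tvec_est = cv2.solvePnP(points_3d, real_px.astype(np.float32),
                                      K, dist)
print("rotation vector error :", np.linalg.norm(rvec_est.ravel() - rvec_true))
print("translation error     :", np.linalg.norm(tvec_est.ravel() - tvec_true))
```

In a real system the simulated Step 2 would be replaced by an actual image registration between the rendered virtual view and the remote sensing image (e.g. feature matching), but the structure of the estimate — registration matches carrying known 3D points into a PnP solve — is the same.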