A spaceborne camera pose estimate method based on high-precision point cloud model

Bian Xiao, Jun Ma, Feng Li, Lei Xin, Bangcheng Zhan
2020 15th IEEE International Conference on Signal Processing (ICSP), 2020-12-06. DOI: 10.1109/ICSP48669.2020.9320984
Citations: 1

Abstract

Obtaining the position and orientation of a camera or sensor is a key task in many fields, such as robot navigation, autonomous driving, and DSM (digital surface model) reconstruction. The pose can be recovered by matching a 2D image against a corresponding digital surface model or point cloud model of the scene. A 3D point cloud model of very high spatial accuracy can be created by combining stereophotogrammetry with big-data processing. So far, the most accurate 3D point cloud models created from satellite imagery reach an accuracy of 3m@SE90 (a 3-meter error at SE90, where SE90 stands for Spherical Error 90%). In this paper, a novel method for estimating the pose of spaceborne cameras is proposed, based on the fusion of high-precision point cloud models and remote sensing images. The core of the method is to project a high-precision 3D point cloud model into the image space of a virtual camera, which transforms the 3D-2D pose estimation problem into a 2D-2D registration problem. The registration result between the two images is then used to estimate the camera pose parameters. Simulation experiments were carried out to evaluate the performance of the method, and the results show that acceptable camera pose accuracy can be achieved with the proposed approach.
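The abstract does not give implementation details, but its central step — projecting a 3D point cloud into the image space of a virtual camera so the problem becomes 2D-2D registration — can be sketched with a standard pinhole projection. The function and variable names below are hypothetical illustrations, not the authors' code; a simple nadir-looking camera geometry is assumed.

```python
import numpy as np

def project_point_cloud(points_xyz, K, R, t, width, height):
    """Project world-frame 3D points into the image plane of a virtual
    pinhole camera with intrinsics K and pose (R, t); keep only points
    in front of the camera that fall inside the image bounds."""
    # Transform world points into the camera frame: X_cam = R @ X + t
    cam = points_xyz @ R.T + t
    in_front = cam[:, 2] > 0
    cam = cam[in_front]
    # Perspective projection: [u, v, 1] ~ K @ X_cam, then divide by depth
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < width) &
              (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return uv[inside], cam[inside, 2]  # pixel coordinates and depths

# Example: a nadir-looking virtual camera 500 m above three ground points
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                    # camera axes aligned with the world frame
t = np.array([0.0, 0.0, 500.0])  # camera 500 m above the scene
pts = np.array([[ 0.0,  0.0, 0.0],
                [10.0,  0.0, 0.0],
                [ 0.0, 10.0, 0.0]])
uv, depth = project_point_cloud(pts, K, R, t, 640, 480)
# The origin projects to the principal point (320, 240); the other two
# points land 20 px away along each axis.
```

Once the point cloud is rendered this way, the pose can be refined by registering the rendered view against the real remote-sensing image with any 2D-2D matching technique, then back-propagating the 2D offsets to the camera pose parameters.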