Integration of multiple dense point clouds based on estimated parameters in photogrammetry with QR code for reducing computation time

Artificial Life and Robotics · IF 0.8 · Q4 (ROBOTICS)
Keita Nakamura, Keita Baba, Yutaka Watanobe, Toshihide Hanari, Taku Matsumoto, Takashi Imabuchi, Kuniaki Kawabata
DOI: 10.1007/s10015-024-00966-3 · Published 2024-09-20
Full text: https://link.springer.com/article/10.1007/s10015-024-00966-3

Abstract

This paper describes a method for integrating multiple dense point clouds using a shared landmark to generate a single real-scale integrated result for photogrammetry. High-density point clouds reconstructed by photogrammetry are difficult to integrate because the scale differs between reconstructions. To solve this problem, this study places a QR code of known size, which serves as a shared landmark, in the reconstruction target environment and divides that environment based on the position of the placed QR code. Photogrammetry is then performed for each divided environment to obtain a high-density point cloud for each. Finally, we propose a method that scales each high-density point cloud based on the size of the QR code and aligns the point clouds into a single high-density point cloud by partial-to-partial registration. To verify the effectiveness of the method, this paper compares, in terms of accuracy and computation time, the result obtained by applying all images to photogrammetry at once with the result obtained by the proposed method. In this verification, both ideal images generated by simulation and images obtained in real environments are used for photogrammetry. We clarify the relationship between the number of divided environments, the accuracy of the reconstruction result, and the computation time required for the reconstruction.
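The abstract outlines two computational steps: rescaling each dense point cloud to real scale from the known QR code size, and merging the rescaled clouds by partial-to-partial registration. Below is a minimal sketch of those two steps, assuming Open3D point clouds and that the 3D positions of the four QR code corners have already been identified in each reconstruction; the function names and the use of point-to-plane ICP as the registration step are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: rescale each photogrammetric cloud using the known QR code
# edge length, then align two partially overlapping clouds with ICP.
import numpy as np
import open3d as o3d


def rescale_to_real_scale(cloud, qr_corners, qr_edge_length_m):
    """Scale a point cloud so the reconstructed QR code matches its known
    physical edge length (metres). qr_corners: 4x3 array of corner points."""
    # Mean edge length of the QR code as reconstructed (arbitrary scale).
    edges = [np.linalg.norm(qr_corners[i] - qr_corners[(i + 1) % 4])
             for i in range(4)]
    scale = qr_edge_length_m / np.mean(edges)
    cloud.scale(scale, center=np.zeros(3))
    return cloud, scale


def register_partial(source, target, voxel_size=0.02):
    """Align two partially overlapping, already-rescaled clouds and return
    the 4x4 transformation that maps source onto target."""
    src = source.voxel_down_sample(voxel_size)
    tgt = target.voxel_down_sample(voxel_size)
    for pc in (src, tgt):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2,
                                                 max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt,
        max_correspondence_distance=voxel_size * 4,
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPlane())
    return result.transformation
```

In this sketch, each divided environment's cloud would first be rescaled with rescale_to_real_scale and then chained back into a single model by applying register_partial pairwise to neighbouring clouds.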

Source journal: Artificial Life and Robotics
CiteScore: 2.00
Self-citation rate: 22.20%
Articles published: 101
About the journal: Artificial Life and Robotics is an international journal publishing original technical papers and authoritative state-of-the-art reviews on the development of new technologies concerning artificial life and robotics, especially computer-based simulation and hardware for the twenty-first century. This journal covers a broad multidisciplinary field, including areas such as artificial brain research, artificial intelligence, artificial life, artificial living, artificial mind research, brain science, chaos, cognitive science, complexity, computer graphics, evolutionary computations, fuzzy control, genetic algorithms, innovative computations, intelligent control and modelling, micromachines, micro-robot world cup soccer tournament, mobile vehicles, neural networks, neurocomputers, neurocomputing technologies and applications, robotics, robust virtual engineering, and virtual reality. Hardware-oriented submissions are particularly welcome. Publishing body: International Symposium on Artificial Life and Robotics. Editor-in-Chief: Hiroshi Tanaka, Hatanaka R Apartment 101, Hatanaka 8-7A, Ooaza-Hatanaka, Oita city, Oita, Japan 870-0856. © International Symposium on Artificial Life and Robotics