Two-stage random forest fusion for multi-source point clouds in underground 3D reconstruction

Impact Factor 3.7 · CAS Category 2 (Engineering Technology) · JCR Q2 (Optics)
Haonan Pang, Hongtao Yang, Lili Mu, Tianfeng Wu, Changjiu Huang, Ruilin Fan
{"title":"地下三维重建中多源点云两阶段随机森林融合","authors":"Haonan Pang,&nbsp;Hongtao Yang,&nbsp;Lili Mu,&nbsp;Tianfeng Wu,&nbsp;Changjiu Huang,&nbsp;Ruilin Fan","doi":"10.1016/j.optlaseng.2025.109301","DOIUrl":null,"url":null,"abstract":"<div><div>Camera sensors are highly sensitive to factors such as lighting, dust, fog, and lack of texture in underground environments, introducing measurement noise and significantly affecting 3D reconstruction accuracy. Existing point cloud regression and denoising models often struggle with the high-dimensional and unstructured nature of point clouds, increasing data processing complexity and limiting prediction accuracy. Furthermore, while fusing multi-line LiDAR and camera data can improve reconstruction accuracy, it introduces challenges related to data redundancy and computational burden. This paper proposes a fusion method using single-line LiDAR and depth cameras based on two-stage random forest regression. In the first stage, a dense 3D point cloud is constructed from LiDAR data, which serves as the reference for initial denoising of the depth camera point cloud using random forest regression. The second stage refines the denoising by incorporating spatial coordinates and FPFH descriptors through a second application of random forest regression. Experimental results demonstrate that the proposed method achieved at least a 77.42% improvement in noise suppression across five representative tunnel scenarios. In terms of dimensional accuracy, the maximum absolute error was reduced by 72.97% to 95.86%, while the average absolute error decreased by 71.91% to 96.92%, with final absolute errors remaining below 10 mm. These results confirm the method's strong generalization capability and high-precision performance in challenging underground reconstruction tasks. The associated dataset and source code are publicly available at: <span><span>https://github.com/phn0315/LiDAR-Vision-fusion.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"195 ","pages":"Article 109301"},"PeriodicalIF":3.7000,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Two-stage random forest fusion for multi-source point clouds in underground 3D reconstruction\",\"authors\":\"Haonan Pang,&nbsp;Hongtao Yang,&nbsp;Lili Mu,&nbsp;Tianfeng Wu,&nbsp;Changjiu Huang,&nbsp;Ruilin Fan\",\"doi\":\"10.1016/j.optlaseng.2025.109301\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Camera sensors are highly sensitive to factors such as lighting, dust, fog, and lack of texture in underground environments, introducing measurement noise and significantly affecting 3D reconstruction accuracy. Existing point cloud regression and denoising models often struggle with the high-dimensional and unstructured nature of point clouds, increasing data processing complexity and limiting prediction accuracy. Furthermore, while fusing multi-line LiDAR and camera data can improve reconstruction accuracy, it introduces challenges related to data redundancy and computational burden. This paper proposes a fusion method using single-line LiDAR and depth cameras based on two-stage random forest regression. In the first stage, a dense 3D point cloud is constructed from LiDAR data, which serves as the reference for initial denoising of the depth camera point cloud using random forest regression. 
The second stage refines the denoising by incorporating spatial coordinates and FPFH descriptors through a second application of random forest regression. Experimental results demonstrate that the proposed method achieved at least a 77.42% improvement in noise suppression across five representative tunnel scenarios. In terms of dimensional accuracy, the maximum absolute error was reduced by 72.97% to 95.86%, while the average absolute error decreased by 71.91% to 96.92%, with final absolute errors remaining below 10 mm. These results confirm the method's strong generalization capability and high-precision performance in challenging underground reconstruction tasks. The associated dataset and source code are publicly available at: <span><span>https://github.com/phn0315/LiDAR-Vision-fusion.git</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":49719,\"journal\":{\"name\":\"Optics and Lasers in Engineering\",\"volume\":\"195 \",\"pages\":\"Article 109301\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optics and Lasers in Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0143816625004865\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics and Lasers in Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0143816625004865","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPTICS","Score":null,"Total":0}
Citations: 0

Abstract

Camera sensors are highly sensitive to factors such as lighting, dust, fog, and lack of texture in underground environments, introducing measurement noise and significantly affecting 3D reconstruction accuracy. Existing point cloud regression and denoising models often struggle with the high-dimensional and unstructured nature of point clouds, increasing data processing complexity and limiting prediction accuracy. Furthermore, while fusing multi-line LiDAR and camera data can improve reconstruction accuracy, it introduces challenges related to data redundancy and computational burden. This paper proposes a fusion method using single-line LiDAR and depth cameras based on two-stage random forest regression. In the first stage, a dense 3D point cloud is constructed from LiDAR data, which serves as the reference for initial denoising of the depth camera point cloud using random forest regression. The second stage refines the denoising by incorporating spatial coordinates and FPFH descriptors through a second application of random forest regression. Experimental results demonstrate that the proposed method achieved at least a 77.42% improvement in noise suppression across five representative tunnel scenarios. In terms of dimensional accuracy, the maximum absolute error was reduced by 72.97% to 95.86%, while the average absolute error decreased by 71.91% to 96.92%, with final absolute errors remaining below 10 mm. These results confirm the method's strong generalization capability and high-precision performance in challenging underground reconstruction tasks. The associated dataset and source code are publicly available at: https://github.com/phn0315/LiDAR-Vision-fusion.git.
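
To make the two-stage pipeline more concrete, the following is a minimal, hypothetical sketch in Python using scikit-learn and Open3D. The use of nearest LiDAR points as regression targets, the FPFH search radii, and all function names and parameters are assumptions made for illustration only and are not the authors' implementation; the actual code is in the linked repository.

```python
# Minimal two-stage random-forest denoising sketch (illustrative, not the authors' code).
# Assumed inputs: camera_pts (N, 3) noisy depth-camera points,
#                 lidar_pts  (M, 3) dense LiDAR reference points.
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestRegressor


def nearest_reference(points, reference):
    """For each point, return the closest reference point (used as a pseudo ground-truth target)."""
    tree = cKDTree(reference)
    _, idx = tree.query(points)
    return reference[idx]


def compute_fpfh(points, radius=0.25, max_nn=50):
    """Compute a 33-dimensional FPFH descriptor per point with Open3D."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=max_nn))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=2 * radius, max_nn=100))
    return np.asarray(fpfh.data).T  # shape (N, 33)


def two_stage_rf_denoise(camera_pts, lidar_pts, n_trees=200):
    # Stage 1: regress corrected coordinates from the raw camera coordinates,
    # using the nearest point in the dense LiDAR cloud as the target.
    targets = nearest_reference(camera_pts, lidar_pts)
    rf1 = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1)
    rf1.fit(camera_pts, targets)
    stage1_pts = rf1.predict(camera_pts)

    # Stage 2: refine by augmenting the spatial coordinates with FPFH descriptors
    # computed on the stage-1 result, then regressing against the reference again.
    features = np.hstack([stage1_pts, compute_fpfh(stage1_pts)])
    rf2 = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1)
    rf2.fit(features, nearest_reference(stage1_pts, lidar_pts))
    return rf2.predict(features)
```

In this sketch the dense LiDAR cloud supplies the denoising reference for both stages, mirroring the abstract's description; the second stage simply widens the feature vector from raw coordinates (3-D) to coordinates plus FPFH descriptors (36-D) before a second random forest regression.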
Source journal
Optics and Lasers in Engineering
CiteScore: 8.90
Self-citation rate: 8.70%
Articles per year: 384
Review time: 42 days
Aims and scope: Optics and Lasers in Engineering aims at providing an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, the development and enhancement of solutions and new theoretical concepts for experimental methods. Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal is defined to include the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques