{"title":"地下三维重建中多源点云两阶段随机森林融合","authors":"Haonan Pang, Hongtao Yang, Lili Mu, Tianfeng Wu, Changjiu Huang, Ruilin Fan","doi":"10.1016/j.optlaseng.2025.109301","DOIUrl":null,"url":null,"abstract":"<div><div>Camera sensors are highly sensitive to factors such as lighting, dust, fog, and lack of texture in underground environments, introducing measurement noise and significantly affecting 3D reconstruction accuracy. Existing point cloud regression and denoising models often struggle with the high-dimensional and unstructured nature of point clouds, increasing data processing complexity and limiting prediction accuracy. Furthermore, while fusing multi-line LiDAR and camera data can improve reconstruction accuracy, it introduces challenges related to data redundancy and computational burden. This paper proposes a fusion method using single-line LiDAR and depth cameras based on two-stage random forest regression. In the first stage, a dense 3D point cloud is constructed from LiDAR data, which serves as the reference for initial denoising of the depth camera point cloud using random forest regression. The second stage refines the denoising by incorporating spatial coordinates and FPFH descriptors through a second application of random forest regression. Experimental results demonstrate that the proposed method achieved at least a 77.42% improvement in noise suppression across five representative tunnel scenarios. In terms of dimensional accuracy, the maximum absolute error was reduced by 72.97% to 95.86%, while the average absolute error decreased by 71.91% to 96.92%, with final absolute errors remaining below 10 mm. These results confirm the method's strong generalization capability and high-precision performance in challenging underground reconstruction tasks. 
The associated dataset and source code are publicly available at: <span><span>https://github.com/phn0315/LiDAR-Vision-fusion.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"195 ","pages":"Article 109301"},"PeriodicalIF":3.7000,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Two-stage random forest fusion for multi-source point clouds in underground 3D reconstruction\",\"authors\":\"Haonan Pang, Hongtao Yang, Lili Mu, Tianfeng Wu, Changjiu Huang, Ruilin Fan\",\"doi\":\"10.1016/j.optlaseng.2025.109301\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Camera sensors are highly sensitive to factors such as lighting, dust, fog, and lack of texture in underground environments, introducing measurement noise and significantly affecting 3D reconstruction accuracy. Existing point cloud regression and denoising models often struggle with the high-dimensional and unstructured nature of point clouds, increasing data processing complexity and limiting prediction accuracy. Furthermore, while fusing multi-line LiDAR and camera data can improve reconstruction accuracy, it introduces challenges related to data redundancy and computational burden. This paper proposes a fusion method using single-line LiDAR and depth cameras based on two-stage random forest regression. In the first stage, a dense 3D point cloud is constructed from LiDAR data, which serves as the reference for initial denoising of the depth camera point cloud using random forest regression. The second stage refines the denoising by incorporating spatial coordinates and FPFH descriptors through a second application of random forest regression. 
Experimental results demonstrate that the proposed method achieved at least a 77.42% improvement in noise suppression across five representative tunnel scenarios. In terms of dimensional accuracy, the maximum absolute error was reduced by 72.97% to 95.86%, while the average absolute error decreased by 71.91% to 96.92%, with final absolute errors remaining below 10 mm. These results confirm the method's strong generalization capability and high-precision performance in challenging underground reconstruction tasks. The associated dataset and source code are publicly available at: <span><span>https://github.com/phn0315/LiDAR-Vision-fusion.git</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":49719,\"journal\":{\"name\":\"Optics and Lasers in Engineering\",\"volume\":\"195 \",\"pages\":\"Article 109301\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2025-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optics and Lasers in Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0143816625004865\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optics and Lasers in Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0143816625004865","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPTICS","Score":null,"Total":0}
Two-stage random forest fusion for multi-source point clouds in underground 3D reconstruction
Camera sensors are highly sensitive to factors such as lighting, dust, fog, and lack of texture in underground environments, introducing measurement noise and significantly affecting 3D reconstruction accuracy. Existing point cloud regression and denoising models often struggle with the high-dimensional and unstructured nature of point clouds, increasing data processing complexity and limiting prediction accuracy. Furthermore, while fusing multi-line LiDAR and camera data can improve reconstruction accuracy, it introduces challenges related to data redundancy and computational burden. This paper proposes a fusion method using single-line LiDAR and depth cameras based on two-stage random forest regression. In the first stage, a dense 3D point cloud is constructed from LiDAR data, which serves as the reference for initial denoising of the depth camera point cloud using random forest regression. The second stage refines the denoising by incorporating spatial coordinates and FPFH descriptors through a second application of random forest regression. Experimental results demonstrate that the proposed method achieved at least a 77.42% improvement in noise suppression across five representative tunnel scenarios. In terms of dimensional accuracy, the maximum absolute error was reduced by 72.97% to 95.86%, while the average absolute error decreased by 71.91% to 96.92%, with final absolute errors remaining below 10 mm. These results confirm the method's strong generalization capability and high-precision performance in challenging underground reconstruction tasks. The associated dataset and source code are publicly available at: https://github.com/phn0315/LiDAR-Vision-fusion.git.
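The two-stage pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses synthetic stand-in data, takes each camera point's nearest LiDAR point as the stage-one regression target, and substitutes a simple neighbourhood-offset feature for the FPFH descriptor (computing real FPFH would require a dedicated library such as Open3D). All variable names and parameter choices here are assumptions for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic stand-in data: a dense, low-noise LiDAR reference surface and a
# noisy depth-camera point cloud observing the same surface.
n = 2000
xy = rng.uniform(0.0, 5.0, size=(n, 2))
z_true = np.sin(xy[:, 0]) + 0.5 * np.cos(xy[:, 1])
lidar_ref = np.column_stack([xy, z_true])                        # reference cloud
camera = np.column_stack([xy, z_true + rng.normal(0, 0.15, n)])  # noisy cloud

# Stage 1: pair each camera point with its nearest LiDAR reference point,
# then train a random forest to regress the corrected coordinates.
nn = NearestNeighbors(n_neighbors=1).fit(lidar_ref)
_, idx = nn.kneighbors(camera)
targets = lidar_ref[idx[:, 0]]
rf1 = RandomForestRegressor(n_estimators=50, random_state=0)
rf1.fit(camera, targets)
stage1 = rf1.predict(camera)

# Stage 2: refine using spatial coordinates plus a local geometric
# descriptor. The paper uses FPFH; a neighbourhood-mean offset stands
# in for it here so the sketch stays dependency-light.
nn2 = NearestNeighbors(n_neighbors=10).fit(stage1)
_, nidx = nn2.kneighbors(stage1)
local_mean = stage1[nidx].mean(axis=1)
features = np.hstack([stage1, stage1 - local_mean])
rf2 = RandomForestRegressor(n_estimators=50, random_state=0)
rf2.fit(features, targets)
stage2 = rf2.predict(features)

err_raw = np.abs(camera[:, 2] - z_true).mean()
err_out = np.abs(stage2[:, 2] - z_true).mean()
print(f"mean |z| error: raw {err_raw:.3f} -> denoised {err_out:.3f}")
```

In this toy setting the second forest sees both position and local-shape information, mirroring how the paper's stage two combines spatial coordinates with FPFH descriptors to refine the stage-one result.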
Journal introduction:
Optics and Lasers in Engineering aims at providing an international forum for the interchange of information on the development of optical techniques and laser technology in engineering. Emphasis is placed on contributions targeted at the practical use of methods and devices, the development and enhancement of solutions and new theoretical concepts for experimental methods.
Optics and Lasers in Engineering reflects the main areas in which optical methods are being used and developed for an engineering environment. Manuscripts should offer clear evidence of novelty and significance. Papers focusing on parameter optimization or computational issues are not suitable. Similarly, papers focused on an application rather than the optical method fall outside the journal's scope. The scope of the journal is defined to include the following:
- Optical Metrology
- Optical Methods for 3D visualization and virtual engineering
- Optical Techniques for Microsystems
- Imaging, Microscopy and Adaptive Optics
- Computational Imaging
- Laser methods in manufacturing
- Integrated optical and photonic sensors
- Optics and Photonics in Life Science
- Hyperspectral and spectroscopic methods
- Infrared and Terahertz techniques