An improved object detection network based on LIDAR point cloud and camera

Yongze Qi, Xin Meng, Haosen Wang, Bo Lu, Sarath Kodagoda, Shifeng Wang

Journal of Russian Laser Research, Vol. 45, No. 6, pp. 619-628 (published 2026-01-14)
DOI: 10.1007/s10946-025-10267-0
URL: https://link.springer.com/article/10.1007/s10946-025-10267-0
Citations: 0
Abstract
The range of applications for Light Detection and Ranging (LiDAR) has been expanding, especially in object detection. However, LiDAR-only detectors depend on a single modality and cannot exploit complementary information from other sensors. Multimodal fusion of LiDAR and camera data combines the two sources, substantially increasing detection precision. This paper proposes a new network, the Convergent Attention-Enhanced Camera-LiDAR Object Candidates System (CAECs), a decision-level fusion architecture for object detection. First, the CAECs network uses an advanced candidate encoding mechanism that sifts through and retains the top candidates from both the 2D and 3D detectors, forming a comprehensive feature tensor and avoiding the loss of crucial detections. Second, AgileSightNet, which combines layered channel fusion with an attention scheme, improves feature relevance and reinforces important information. Tests on the KITTI benchmark show that the proposed method detects pedestrians and cyclists more accurately, improving accuracy by 6.43% and 6.26%, respectively, over existing 3D multimodal networks, and by 13.03% and 5.97% over single-modal 3D networks. This demonstrates better precision and robustness in LiDAR point cloud applications.
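To make the decision-level fusion idea concrete, the sketch below shows one common way such a candidate encoding can be built: every 2D image-plane candidate is paired with every 3D candidate (projected onto the image), and each pair is encoded as a small feature tuple (overlap plus both detector scores) in a dense grid, so that low-score but geometrically consistent candidates are not discarded before fusion. This is an illustrative sketch only; the function names, the tensor layout, and the choice of features are assumptions, not the paper's actual CAECs implementation.

```python
def iou_2d(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def build_candidate_tensor(dets_2d, dets_3d_projected):
    """Pair every 2D candidate with every projected 3D candidate.

    dets_2d: list of (box, score) from the image detector.
    dets_3d_projected: list of (box, score), each 3D box already
    projected onto the image plane.
    Returns a k2 x k3 grid of (iou, score_2d, score_3d) triples that a
    downstream fusion network could score; all pairs are kept so no
    candidate is pruned before fusion.
    """
    return [[(iou_2d(b2, b3), s2, s3) for (b3, s3) in dets_3d_projected]
            for (b2, s2) in dets_2d]

# Toy example: two 2D candidates, one projected 3D candidate.
dets_2d = [((10, 10, 50, 50), 0.9), ((60, 60, 90, 90), 0.4)]
dets_3d = [((12, 12, 48, 48), 0.8)]
tensor = build_candidate_tensor(dets_2d, dets_3d)
```

In this toy run, the first 2D box overlaps the projected 3D box strongly (IoU 0.81), while the second does not overlap at all (IoU 0.0), yet both pairs remain in the tensor for the fusion stage to weigh.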
Journal description:
The journal publishes original, high-quality articles that follow new developments in all areas of laser research, including:
laser physics;
laser interaction with matter;
properties of laser beams;
laser thermonuclear fusion;
laser chemistry;
quantum and nonlinear optics;
optoelectronics;
solid state, gas, liquid, chemical, and semiconductor lasers.