Jianqiang Xu, Chunying Song, Chao Shi, Huafeng Liu, Qiong Wang
{"title":"不确定性BEV:用于路边3D物体检测的不确定性感知BEV融合","authors":"Jianqiang Xu , Chunying Song , Chao Shi , Huafeng Liu , Qiong Wang","doi":"10.1016/j.imavis.2025.105567","DOIUrl":null,"url":null,"abstract":"<div><div>With the rapid development of autonomous driving technology and intelligent transportation systems, multimodal fusion-based Bird’s-Eye-View (BEV) perception has become a key technique for environmental understanding. However, existing methods suffer from feature misalignment caused by calibration errors between different sensors, ultimately limiting the effectiveness of multimodal fusion. In this paper, we propose a robust roadside BEV perception framework, named UncertainBEV. To address feature misalignment caused by projection inaccuracies between LiDAR and camera sensors, we introduce a novel module called UncertainFuser, which models the uncertainty of both camera and LiDAR features to dynamically adjust fusion weights, thereby mitigating feature misalignment. Additionally, we optimize the sparse voxel pooling module and design a multi-head attention mechanism to enhance the quality of BEV features from both modalities. Built upon the CUDA-V2XFusion and BEVFusion frameworks, our proposed UncertainBEV achieves state-of-the-art performance on the DAIR-V2X-I dataset, with 3D mean Average Precision (mAP) improvements of 2.88%, 7.73%, and 3.68% for vehicles, pedestrians, and cyclists, respectively. Our code has been open-sourced at <span><span>UncertainBEV</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"159 ","pages":"Article 105567"},"PeriodicalIF":4.2000,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"UncertainBEV: Uncertainty-aware BEV fusion for roadside 3D object detection\",\"authors\":\"Jianqiang Xu , Chunying Song , Chao Shi , Huafeng Liu , Qiong Wang\",\"doi\":\"10.1016/j.imavis.2025.105567\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>With the rapid development of autonomous driving technology and intelligent transportation systems, multimodal fusion-based Bird’s-Eye-View (BEV) perception has become a key technique for environmental understanding. However, existing methods suffer from feature misalignment caused by calibration errors between different sensors, ultimately limiting the effectiveness of multimodal fusion. In this paper, we propose a robust roadside BEV perception framework, named UncertainBEV. To address feature misalignment caused by projection inaccuracies between LiDAR and camera sensors, we introduce a novel module called UncertainFuser, which models the uncertainty of both camera and LiDAR features to dynamically adjust fusion weights, thereby mitigating feature misalignment. Additionally, we optimize the sparse voxel pooling module and design a multi-head attention mechanism to enhance the quality of BEV features from both modalities. Built upon the CUDA-V2XFusion and BEVFusion frameworks, our proposed UncertainBEV achieves state-of-the-art performance on the DAIR-V2X-I dataset, with 3D mean Average Precision (mAP) improvements of 2.88%, 7.73%, and 3.68% for vehicles, pedestrians, and cyclists, respectively. 
Our code has been open-sourced at <span><span>UncertainBEV</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50374,\"journal\":{\"name\":\"Image and Vision Computing\",\"volume\":\"159 \",\"pages\":\"Article 105567\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-05-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Image and Vision Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0262885625001556\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885625001556","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
UncertainBEV: Uncertainty-aware BEV fusion for roadside 3D object detection
With the rapid development of autonomous driving technology and intelligent transportation systems, multimodal fusion-based Bird’s-Eye-View (BEV) perception has become a key technique for environmental understanding. However, existing methods suffer from feature misalignment caused by calibration errors between different sensors, ultimately limiting the effectiveness of multimodal fusion. In this paper, we propose a robust roadside BEV perception framework, named UncertainBEV. To address feature misalignment caused by projection inaccuracies between LiDAR and camera sensors, we introduce a novel module called UncertainFuser, which models the uncertainty of both camera and LiDAR features to dynamically adjust fusion weights, thereby mitigating feature misalignment. Additionally, we optimize the sparse voxel pooling module and design a multi-head attention mechanism to enhance the quality of BEV features from both modalities. Built upon the CUDA-V2XFusion and BEVFusion frameworks, our proposed UncertainBEV achieves state-of-the-art performance on the DAIR-V2X-I dataset, with 3D mean Average Precision (mAP) improvements of 2.88%, 7.73%, and 3.68% for vehicles, pedestrians, and cyclists, respectively. Our code has been open-sourced at UncertainBEV.
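The core idea described for UncertainFuser is to estimate how reliable each modality's BEV features are and to let that uncertainty set the fusion weights, so that misaligned or noisy features contribute less. The sketch below illustrates one plausible way to realize uncertainty-weighted BEV fusion in PyTorch; it is not the authors' published implementation, and the module name UncertaintyWeightedFusion, the per-cell log-variance heads, and all tensor shapes are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of uncertainty-weighted BEV fusion (PyTorch).
# NOT the paper's UncertainFuser: names, heads, and shapes are hypothetical.
import torch
import torch.nn as nn


class UncertaintyWeightedFusion(nn.Module):
    """Fuses camera and LiDAR BEV features with per-cell weights derived
    from predicted log-variances (lower variance -> higher weight)."""

    def __init__(self, cam_channels: int, lidar_channels: int, out_channels: int):
        super().__init__()
        # Project both modalities to a common channel dimension.
        self.cam_proj = nn.Conv2d(cam_channels, out_channels, kernel_size=1)
        self.lidar_proj = nn.Conv2d(lidar_channels, out_channels, kernel_size=1)
        # Lightweight heads predicting a per-cell log-variance for each modality.
        self.cam_logvar = nn.Conv2d(out_channels, 1, kernel_size=3, padding=1)
        self.lidar_logvar = nn.Conv2d(out_channels, 1, kernel_size=3, padding=1)

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        # cam_bev:   (B, C_cam,   H, W) camera BEV features
        # lidar_bev: (B, C_lidar, H, W) LiDAR BEV features on the same grid
        cam = self.cam_proj(cam_bev)
        lidar = self.lidar_proj(lidar_bev)

        # Higher predicted variance -> lower fusion weight at that BEV cell.
        logvars = torch.cat(
            [self.cam_logvar(cam), self.lidar_logvar(lidar)], dim=1
        )  # (B, 2, H, W)
        weights = torch.softmax(-logvars, dim=1)  # inverse-uncertainty weighting

        fused = weights[:, 0:1] * cam + weights[:, 1:2] * lidar
        return fused


if __name__ == "__main__":
    fusion = UncertaintyWeightedFusion(cam_channels=80, lidar_channels=128, out_channels=128)
    cam_bev = torch.randn(2, 80, 128, 128)
    lidar_bev = torch.randn(2, 128, 128, 128)
    print(fusion(cam_bev, lidar_bev).shape)  # torch.Size([2, 128, 128, 128])
```

In this toy version, each modality predicts a per-cell log-variance and the fusion weights come from a softmax over the negated log-variances, so BEV cells where one sensor is judged unreliable (for example, due to projection error) lean more on the other sensor.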
Journal Introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to foster a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.