GraphFusion: Robust 3D Detection via Cross-Modal Graph and Uncertainty-Aware Bayesian Fusion
Huishan Wang; Jie Ma; Jianlei Zhang; Fangwei Chen
DOI: 10.1109/LSP.2025.3609243
IEEE Signal Processing Letters, vol. 32, pp. 3645-3649, published 2025-09-11
https://ieeexplore.ieee.org/document/11159149/
Citations: 0
Abstract
Multimodal 3D object detection significantly enhances perception by fusing LiDAR point clouds and RGB images. However, existing methods often fail to adaptively estimate modality confidence under challenging conditions such as heavy occlusion or sparse point clouds, leading to degraded fusion performance. In this letter, we propose GraphFusion, a multimodal framework that integrates cross-modal graph modeling with Bayesian uncertainty-aware fusion for robust 3D object detection. Specifically, a heterogeneous graph driven by geometric and semantic cues aligns 3D points with 2D pixels. A Bayesian attention mechanism then leverages predictive uncertainty to dynamically reweight modalities, prioritizing high-confidence information and enabling noise-resilient and spatially adaptive fusion. The proposed module is highly generalizable and can be seamlessly integrated into existing detectors as a plug-and-play component. Extensive experiments on KITTI and nuScenes demonstrate that GraphFusion achieves significant accuracy improvements with superior robustness and generalization, especially in complex environments.
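The abstract's core idea, reweighting modalities by predictive uncertainty so that high-confidence information dominates the fused feature, can be illustrated with a minimal sketch. The code below weights each modality's features by its inverse predictive variance (precision), a common Bayesian heuristic; the function name and the precision-based weighting are illustrative assumptions, not the paper's actual attention mechanism.

```python
import numpy as np

def uncertainty_weighted_fusion(feat_lidar, feat_rgb,
                                var_lidar, var_rgb, eps=1e-6):
    """Fuse two modality feature vectors, down-weighting the more
    uncertain one.

    Each modality's weight is proportional to its inverse predictive
    variance (precision). This is a simplified stand-in for the
    Bayesian attention described in the paper, whose exact form is
    not specified in the abstract.
    """
    w_lidar = 1.0 / (np.asarray(var_lidar) + eps)  # precision of LiDAR branch
    w_rgb = 1.0 / (np.asarray(var_rgb) + eps)      # precision of RGB branch
    total = w_lidar + w_rgb
    # Normalized convex combination: weights sum to 1 per element.
    return (w_lidar * feat_lidar + w_rgb * feat_rgb) / total

# Example: when the LiDAR branch is confident (low variance) and the
# RGB branch is uncertain, the fused feature stays close to LiDAR.
fused = uncertainty_weighted_fusion(
    feat_lidar=np.array([1.0, 1.0]),
    feat_rgb=np.array([0.0, 0.0]),
    var_lidar=0.1,
    var_rgb=1.0,
)
```

Under heavy occlusion the image branch's variance would rise, shifting weight toward the LiDAR features, which is the noise-resilient behavior the abstract describes.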
About the Journal
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language and audio processing. Papers published in the Letters can be presented within one year of their appearance in signal processing conferences such as ICASSP, GlobalSIP and ICIP, and also in several workshops organized by the Signal Processing Society.