{"title":"TinyFusionDet:用于边缘3D目标检测的硬件高效LiDAR-Camera融合框架","authors":"Yishi Li;Fanhong Zeng;Rui Lai;Tong Wu;Juntao Guan;Anfu Zhu;Zhangming Zhu","doi":"10.1109/TCSVT.2025.3556711","DOIUrl":null,"url":null,"abstract":"Current LiDAR-Camera fusion methods for 3D object detection achieve considerable accuracy at the immense cost of computation and storage, posing challenges for the deployment at the edge. To address this issue, we propose a lightweight 3D object detection framework, namely TinyFusionDet. Specially, we put forward an ingenious Hybrid Scale Pillar Strategy in LiDAR point cloud feature extraction to efficiently improve the detection accuracy of small objects. Meanwhile, a low cost Cross-Modal Heatmap Attention module is presented to suppress background interference in image features for reducing false positives. Moreover, a Cross-Modal Feature Interaction module is designed to enhance the cross-modal information fusion among channels for further promoting the detection precision. Extensive experiments demonstrated that TinyFusionDet achieves competitive accuracy with the lowest memory consumption and inference latency, making it suitable for hardware constrained edge devices. Furthermore, TinyFusionDet is implemented on a customized FPGA-based prototype system, yielding a record high energy efficiency up to 114.97GOPS/W. To the best of our knowledge, this marks the first real-time LiDAR-Camera fusion detection framework for edge applications.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 9","pages":"8819-8834"},"PeriodicalIF":11.1000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TinyFusionDet: Hardware-Efficient LiDAR-Camera Fusion Framework for 3D Object Detection at Edge\",\"authors\":\"Yishi Li;Fanhong Zeng;Rui Lai;Tong Wu;Juntao Guan;Anfu Zhu;Zhangming Zhu\",\"doi\":\"10.1109/TCSVT.2025.3556711\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Current LiDAR-Camera fusion methods for 3D object detection achieve considerable accuracy at the immense cost of computation and storage, posing challenges for the deployment at the edge. To address this issue, we propose a lightweight 3D object detection framework, namely TinyFusionDet. Specially, we put forward an ingenious Hybrid Scale Pillar Strategy in LiDAR point cloud feature extraction to efficiently improve the detection accuracy of small objects. Meanwhile, a low cost Cross-Modal Heatmap Attention module is presented to suppress background interference in image features for reducing false positives. Moreover, a Cross-Modal Feature Interaction module is designed to enhance the cross-modal information fusion among channels for further promoting the detection precision. Extensive experiments demonstrated that TinyFusionDet achieves competitive accuracy with the lowest memory consumption and inference latency, making it suitable for hardware constrained edge devices. Furthermore, TinyFusionDet is implemented on a customized FPGA-based prototype system, yielding a record high energy efficiency up to 114.97GOPS/W. 
To the best of our knowledge, this marks the first real-time LiDAR-Camera fusion detection framework for edge applications.\",\"PeriodicalId\":13082,\"journal\":{\"name\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"volume\":\"35 9\",\"pages\":\"8819-8834\"},\"PeriodicalIF\":11.1000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10947105/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10947105/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
TinyFusionDet: Hardware-Efficient LiDAR-Camera Fusion Framework for 3D Object Detection at Edge
Current LiDAR-Camera fusion methods for 3D object detection achieve considerable accuracy at an immense cost in computation and storage, posing challenges for deployment at the edge. To address this issue, we propose a lightweight 3D object detection framework, namely TinyFusionDet. Specifically, we put forward a Hybrid Scale Pillar Strategy for LiDAR point cloud feature extraction that efficiently improves the detection accuracy of small objects. Meanwhile, a low-cost Cross-Modal Heatmap Attention module is presented to suppress background interference in image features and thereby reduce false positives. Moreover, a Cross-Modal Feature Interaction module is designed to enhance cross-modal information fusion among channels, further improving detection precision. Extensive experiments demonstrate that TinyFusionDet achieves competitive accuracy with the lowest memory consumption and inference latency, making it suitable for hardware-constrained edge devices. Furthermore, TinyFusionDet is implemented on a customized FPGA-based prototype system, yielding a record-high energy efficiency of up to 114.97 GOPS/W. To the best of our knowledge, this marks the first real-time LiDAR-Camera fusion detection framework for edge applications.
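The abstract names three modules without implementation detail. As a rough illustration only, the PyTorch-style sketch below shows one plausible reading of the ideas: pillar-style BEV features pooled at two grid scales (Hybrid Scale Pillar Strategy), image features gated by a LiDAR-derived heatmap (Cross-Modal Heatmap Attention), and channel re-weighting of the concatenated modalities (Cross-Modal Feature Interaction). Every module name, shape, and layer choice here is an assumption inferred from the abstract, not the paper's actual architecture.

```python
# Illustrative sketch only -- NOT TinyFusionDet's actual architecture.
# All internals are assumptions inferred from the abstract's module names.
import torch
import torch.nn as nn

class HybridScalePillars(nn.Module):
    """Toy stand-in for the Hybrid Scale Pillar Strategy: process BEV
    pillar features at two grid scales so small objects keep fine detail."""
    def __init__(self, c: int):
        super().__init__()
        self.fine = nn.Conv2d(c, c, 3, padding=1)          # fine-grid branch
        self.coarse = nn.Sequential(                       # coarse-grid branch
            nn.AvgPool2d(2),
            nn.Conv2d(c, c, 3, padding=1),
            nn.Upsample(scale_factor=2, mode="nearest"))
    def forward(self, bev):                                # bev: (B, C, H, W)
        return self.fine(bev) + self.coarse(bev)

class HeatmapAttention(nn.Module):
    """Toy Cross-Modal Heatmap Attention: a 1-channel heatmap predicted
    from LiDAR BEV features gates the image features, suppressing
    background regions that would otherwise cause false positives."""
    def __init__(self, c: int):
        super().__init__()
        self.to_heat = nn.Conv2d(c, 1, 1)
    def forward(self, bev, img):                           # img assumed BEV-aligned
        heat = torch.sigmoid(self.to_heat(bev))            # (B, 1, H, W) in [0, 1]
        return img * heat                                  # attenuate background

class ChannelInteraction(nn.Module):
    """Toy Cross-Modal Feature Interaction: squeeze-and-excite style
    channel weights computed jointly over both modalities."""
    def __init__(self, c: int):
        super().__init__()
        self.mix = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * c, 2 * c, 1),
            nn.Sigmoid())
    def forward(self, bev, img):
        fused = torch.cat([bev, img], dim=1)               # (B, 2C, H, W)
        return fused * self.mix(fused)                     # channel re-weighting

if __name__ == "__main__":
    B, C, H, W = 2, 64, 128, 128
    bev, img = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
    bev = HybridScalePillars(C)(bev)
    img = HeatmapAttention(C)(bev, img)
    out = ChannelInteraction(C)(bev, img)
    print(out.shape)                                       # torch.Size([2, 128, 128, 128])
```

Note that a real pipeline would voxelize raw points into pillars and project image features into the BEV frame using camera-LiDAR calibration; this sketch skips both steps and simply assumes the image features are already aligned to the BEV grid.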
About the Journal:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.