Title: Comparative Study on BEV Vision and LiDAR Point Cloud Data Fusion Methods
Author: Junyu Zhou
Journal: Transactions on Computer Science and Intelligent Systems Research
Publication date: 2023-12-21
DOI: 10.62051/ww28m534 (https://doi.org/10.62051/ww28m534)
Abstract
As autonomous driving technology matures, the efficient fusion and processing of multimodal sensor data has become an important research direction. This study explores strategies for integrating BEV (Bird's Eye View) vision with LiDAR point cloud data. Through in-depth comparison, we evaluate the performance and applicability of the three main data fusion methods: early fusion, mid-term fusion, and late fusion. First, we summarize the working principles and data characteristics of BEV vision and LiDAR, and emphasize their key roles in autonomous driving systems. We then describe the theoretical basis and implementation of the three fusion strategies in detail. The experimental results show that each fusion strategy offers distinct advantages under different application scenarios and requirements: early fusion, for example, performs well in high-precision tasks but places heavy demands on computing resources, while mid-term fusion is better suited to scenarios with strict real-time requirements. Overall, this study provides in-depth insights and practical suggestions on fusing BEV vision and LiDAR data for autonomous driving, laying a solid foundation for future research and applications.
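The three fusion points named in the abstract can be illustrated with a minimal sketch. This toy is not from the paper: the encoders, the detection head, the grid size, and the input shapes are all placeholder assumptions, chosen only to show *where* in the pipeline the two modalities are combined under each strategy — at the raw-data level (early), at the feature level (mid-term), or at the prediction level (late).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a 3-channel BEV camera image and a 1-channel
# rasterized LiDAR occupancy grid, both on the same 64x64 BEV lattice.
bev_img = rng.random((3, 64, 64))
lidar_grid = rng.random((1, 64, 64))

def features(x, out_ch=8):
    """Stand-in encoder: a fixed 1x1 'convolution' averaging input channels."""
    w = np.ones((out_ch, x.shape[0])) / x.shape[0]
    return np.einsum('oc,chw->ohw', w, x)  # (out_ch, 64, 64)

def head(f):
    """Stand-in detection head: per-cell score by channel averaging."""
    return f.mean(axis=0)  # (64, 64)

# Early fusion: stack the raw modalities first, then run one shared pipeline.
early = head(features(np.concatenate([bev_img, lidar_grid], axis=0)))

# Mid-term fusion: encode each modality separately, fuse at the feature level.
mid = head(np.concatenate([features(bev_img), features(lidar_grid)], axis=0))

# Late fusion: run each modality's pipeline to the end, combine predictions.
late = 0.5 * (head(features(bev_img)) + head(features(lidar_grid)))

print(early.shape, mid.shape, late.shape)  # each strategy yields a (64, 64) score map
```

The structural difference matches the trade-offs the abstract reports: early fusion processes the full stacked raw input through one pipeline (highest compute, most cross-modal detail available), while mid-term and late fusion keep the per-modality branches independent and therefore parallelizable, which favors real-time use.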