{"title":"PV-SSD: A Multi-Modal Point Cloud 3D Object Detector Based on Projection Features and Voxel Features","authors":"Yongxin Shao;Aihong Tan;Zhetao Sun;Enhui Zheng;Tianhong Yan;Peng Liao","doi":"10.1109/TETCI.2024.3389710","DOIUrl":null,"url":null,"abstract":"3D object detection using LiDAR is critical for autonomous driving. However, the point cloud data in autonomous driving scenarios is sparse. Converting the sparse point cloud into regular data representations (voxels or projection) often leads to information loss due to downsampling or excessive compression of feature information. This kind of information loss will adversely affect detection accuracy, especially for objects with fewer reflective points like cyclists. This paper proposes a multi-modal point cloud 3D object detector based on projection features and voxel features, which consists of two branches. One, called the voxel branch, is used to extract fine-grained local features. Another, called the projection branch, is used to extract projection features from a bird's-eye view and focus on the correlation of local features in the voxel branch. By feeding voxel features into the projection branch, we can compensate for the information loss in the projection branch while focusing on the correlation between neighboring local features in the voxel features. To achieve comprehensive feature fusion of voxel features and projection features, we propose a multi-modal feature fusion module (MSSFA). To further mitigate the loss of crucial features caused by downsampling, we propose a voxel feature extraction method (VR-VFE), which samples feature points based on their importance for the detection task. To validate the effectiveness of our method, we tested it on the KITTI dataset and ONCE dataset. The experimental results show that our method has achieved significant improvement in the detection accuracy of objects with fewer reflection points like cyclists.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"8 5","pages":"3436-3449"},"PeriodicalIF":5.3000,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10509820/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
3D object detection using LiDAR is critical for autonomous driving. However, point cloud data in autonomous driving scenarios is sparse, and converting a sparse point cloud into a regular data representation (voxels or projections) often loses information through downsampling or excessive compression of features. This loss degrades detection accuracy, especially for objects with few reflective points, such as cyclists. This paper proposes a multi-modal point cloud 3D object detector based on projection features and voxel features, which consists of two branches. The voxel branch extracts fine-grained local features. The projection branch extracts projection features from a bird's-eye view and attends to the correlation among the local features from the voxel branch. Feeding voxel features into the projection branch compensates for the information loss in the projection branch while capturing the correlation between neighboring local features in the voxel features. To achieve comprehensive fusion of voxel features and projection features, we propose a multi-modal feature fusion module (MSSFA). To further mitigate the loss of crucial features caused by downsampling, we propose a voxel feature extraction method (VR-VFE) that samples feature points according to their importance for the detection task. We validate our method on the KITTI and ONCE datasets. The experimental results show a significant improvement in detection accuracy for objects with few reflection points, such as cyclists.
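The abstract gives no implementation details, so the following is a minimal PyTorch sketch of the two ideas it names: importance-based sampling of voxel features (in the spirit of VR-VFE) and fusion of voxel-branch features with bird's-eye-view projection features (in the spirit of the two-branch design). All class names, dimensions, and the concatenation-plus-MLP fusion are illustrative assumptions, not the paper's actual MSSFA or VR-VFE implementations.

```python
import torch
import torch.nn as nn


class ImportanceSampler(nn.Module):
    """Keeps the top-k features ranked by a learned importance score,
    rather than using random or farthest-point sampling (hypothetical
    stand-in for the VR-VFE idea described in the abstract)."""

    def __init__(self, feature_dim: int):
        super().__init__()
        # Small MLP that predicts a scalar importance score per feature.
        self.score_net = nn.Sequential(
            nn.Linear(feature_dim, feature_dim // 2),
            nn.ReLU(inplace=True),
            nn.Linear(feature_dim // 2, 1),
        )

    def forward(self, feats: torch.Tensor, k: int) -> torch.Tensor:
        # feats: (N, C) voxel features; returns the (k, C) highest-scoring rows.
        scores = self.score_net(feats).squeeze(-1)               # (N,)
        idx = torch.topk(scores, k=min(k, feats.size(0))).indices
        return feats[idx]


class TwoBranchFusion(nn.Module):
    """Fuses per-voxel features with the corresponding BEV projection
    features by concatenation followed by a shared MLP (an assumed,
    simplified stand-in for the paper's MSSFA module)."""

    def __init__(self, voxel_dim: int, bev_dim: int, out_dim: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(voxel_dim + bev_dim, out_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, voxel_feats: torch.Tensor, bev_feats: torch.Tensor) -> torch.Tensor:
        # voxel_feats: (N, Cv); bev_feats: (N, Cb) gathered at matching BEV cells.
        return self.fuse(torch.cat([voxel_feats, bev_feats], dim=-1))


if __name__ == "__main__":
    sampler = ImportanceSampler(feature_dim=64)
    fusion = TwoBranchFusion(voxel_dim=64, bev_dim=32, out_dim=128)
    voxel_feats = torch.randn(1024, 64)      # toy voxel features
    kept = sampler(voxel_feats, k=256)       # importance-based downsampling
    bev_feats = torch.randn(256, 32)         # toy BEV features at those cells
    fused = fusion(kept, bev_feats)
    print(fused.shape)                       # torch.Size([256, 128])
```

The sketch only illustrates the data flow: score-and-keep instead of blind downsampling, then fuse the surviving voxel features with their projection counterparts before detection heads consume them.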
Journal Introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication that publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. Illustrative examples include glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for IoT and Smart-X technologies.