Title: S2*-ODM: Dual-Stage Improved PointPillar Feature-Based 3D Object Detection Method for Autonomous Driving
Authors: Chen Hua, Xiaokun Zheng, Xinkai Kuang, Wencheng Zhang, Chunmao Jiang, Ziyu Chen, Biao Yu
Journal: Sensors, 25(5), 2025 (ISSN 1424-8220; IF 3.4, JCR Q2, Chemistry, Analytical)
DOI: 10.3390/s25051581
Published: 2025-03-04 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11902516/pdf/
Citations: 0
Abstract
Three-dimensional (3D) object detection is crucial for autonomous driving, yet current PointPillar feature-based methods face challenges like under-segmentation, overlapping, and false detection, particularly in occluded scenarios. This paper presents a novel dual-stage improved PointPillar feature-based 3D object detection method (S2*-ODM) specifically designed to address these issues. The first innovation is the introduction of a dual-stage pillar feature encoding (S2-PFE) module, which effectively integrates both inter-pillar and intra-pillar relational features. This enhancement significantly improves the recognition of local structures and global distributions, enabling better differentiation of objects in occluded or overlapping environments. As a result, it reduces problems such as under-segmentation and false positives. The second key improvement is the incorporation of an attention mechanism within the backbone network, which refines feature extraction by emphasizing critical features in pseudo-images and suppressing irrelevant ones. This mechanism strengthens the network's ability to focus on essential object details. Experimental results on the KITTI dataset show that the proposed method outperforms the baseline, achieving notable improvements in detection accuracy, with average precision for 3D detection of cars, pedestrians, and cyclists increasing by 1.04%, 2.17%, and 3.72%, respectively. These innovations make S2*-ODM a significant advancement in enhancing the accuracy and reliability of 3D object detection for autonomous driving.
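The two mechanisms the abstract names can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the choice of an SE-style sigmoid gate for the attention mechanism, and the centroid-offset augmentation standing in for intra-pillar relational features are all illustrative assumptions; the paper's inter-pillar relational branch is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_pillar(points):
    """Intra-pillar encoding (illustrative): augment each point with its
    offset from the pillar centroid as a local-structure cue, then
    max-pool over the points to get one pillar feature vector."""
    centroid = points.mean(axis=0)
    augmented = np.concatenate([points, points - centroid], axis=1)  # (N, 6)
    return augmented.max(axis=0)  # (6,)

def channel_attention(pseudo_image):
    """SE-style channel attention over a (C, H, W) pseudo-image
    (an assumed stand-in for the paper's backbone attention):
    squeeze with global average pooling, gate channels with a sigmoid."""
    c = pseudo_image.shape[0]
    squeeze = pseudo_image.reshape(c, -1).mean(axis=1)  # (C,)
    weights = 1.0 / (1.0 + np.exp(-squeeze))            # in (0, 1)
    return pseudo_image * weights[:, None, None]

# toy pillar of 5 LiDAR points (x, y, z)
pillar = rng.normal(size=(5, 3))
feat = encode_pillar(pillar)
print(feat.shape)  # (6,)

# toy 4-channel pseudo-image, reweighted per channel
img = rng.normal(size=(4, 8, 8))
out = channel_attention(img)
print(out.shape)  # (4, 8, 8)
```

The max-pool makes the pillar feature permutation-invariant over its points, and the sigmoid gate can only attenuate channels, which is the "suppress irrelevant features" behavior the abstract attributes to the attention module.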
Journal Introduction:
Sensors (ISSN 1424-8220) provides an advanced forum for the science and technology of sensors and biosensors. It publishes reviews (including comprehensive reviews of complete sensor products), regular research papers, and short notes. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible. There is no restriction on the length of the papers. The full experimental details must be provided so that the results can be reproduced.