UpBEV: Fast and Accurate LiDAR-Based Drivable Region Detection Utilizing Uniform Polar BEV

Impact Factor: 14.0 · SCI Region 1 (Engineering) · JCR Q1, Computer Science, Artificial Intelligence
Hao Wen;Tianci Wang;Yong Chen;Chunhua Liu
DOI: 10.1109/TIV.2024.3387330
Journal: IEEE Transactions on Intelligent Vehicles, vol. 9, no. 10, pp. 6648-6659
Publication date: 2024-04-10 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10496244/
Citations: 0

Abstract

Drivable region detection is a crucial upstream task for autonomous navigation, so speed and accuracy are the most critical indicators for safe driving. In this article, we propose a novel representation paradigm for LiDAR data, whereby the drivable region can be efficiently detected and transformed into a dense region in the bird's eye view. Our method differs from conventional spatial feature extraction and computation-intensive deep learning-based methods. Based on the proposed representation paradigm, our method takes full advantage of image-based features and processing to capture the boundaries between drivable and non-drivable regions within 10 ms solely on a CPU clocked at 4.0 GHz, making it suitable for most mobile platforms with various computational resources. Our contributions are fourfold. First, we propose UpBEV, a representation addressing the sparsity of LiDAR point clouds. With this representation, the boundaries are projected into a 2D image and become distinguishable. Second, we develop a complete framework for road detection based on UpBEV, directly generating a dense top-view drivable region that is essential for navigation. Third, comprehensive experiments on the KITTI-Road and SemanticKITTI datasets demonstrate the accuracy, speed, and robustness of our method. In particular, our method outperforms all state-of-the-art non-learning methods on the KITTI-Road Benchmark in both maximum F1-measure and runtime, regardless of data type.
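The core idea of a uniform polar BEV can be illustrated with a minimal sketch. The exact binning and feature scheme of UpBEV is not given in the abstract; the following is a generic, hypothetical illustration in which each 3D LiDAR point (x, y, z) is assigned to a (range-ring, azimuth-sector) cell, and each cell keeps the minimum height of its points, producing a dense 2D image on which image-based boundary detection could then operate.

```python
import numpy as np

def polar_bev(points, n_rings=64, n_sectors=360, max_range=50.0):
    """Project an (N, 3) LiDAR point cloud onto a uniform polar BEV grid.

    Returns an (n_rings, n_sectors) image holding the minimum height per
    cell (NaN where a cell received no points). Bin counts and the
    per-cell feature are illustrative choices, not the paper's exact ones.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)          # planar range of each point
    theta = np.arctan2(y, x)    # azimuth in [-pi, pi)

    keep = r < max_range
    ring = (r[keep] / max_range * n_rings).astype(int)
    sector = ((theta[keep] + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors

    # Unbuffered in-place minimum handles duplicate (ring, sector) indices.
    bev = np.full((n_rings, n_sectors), np.inf)
    np.minimum.at(bev, (ring, sector), z[keep])
    bev[np.isinf(bev)] = np.nan  # mark empty cells
    return bev
```

Because polar binning widens cells with range, distant sparse returns still fall into comparatively few cells, which is one plausible way such a representation densifies the sparse point cloud before 2D image processing.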
Source journal: IEEE Transactions on Intelligent Vehicles (Mathematics - Control and Optimization)
CiteScore: 12.10 · Self-citation rate: 13.40% · Annual articles: 177
Journal description: The IEEE Transactions on Intelligent Vehicles (T-IV) is a premier platform for publishing peer-reviewed articles that present innovative research concepts, application results, significant theoretical findings, and application case studies in the field of intelligent vehicles. With a particular emphasis on automated vehicles within roadway environments, T-IV aims to raise awareness of pressing research and application challenges. Our focus is on providing critical information to the intelligent vehicle community, serving as a dissemination vehicle for IEEE ITS Society members and others interested in learning about the state-of-the-art developments and progress in research and applications related to intelligent vehicles. Join us in advancing knowledge and innovation in this dynamic field.