C-DHV: A Cascaded Deep Hough Voting-Based Tracking Algorithm for LiDAR Point Clouds

Impact Factor: 5.6 · JCR Q1, Engineering, Electrical & Electronic · CAS Region 2 (Engineering & Technology)
Anqi Xu;Jiahao Nie;Zhiwei He;Xudong Lv
DOI: 10.1109/TIM.2024.3497183 · IEEE Transactions on Instrumentation and Measurement, vol. 74, pp. 1-11 · Published 2024-11-13 · https://ieeexplore.ieee.org/document/10752536/
Citations: 0

Abstract

LiDAR-based 3-D object tracking systems are widely used in scenarios such as autonomous driving and video surveillance because they provide real-time, accurate object locations. Existing 3-D object tracking algorithms have achieved success by employing deep Hough voting to generate 3-D proposals. However, relying on a single voting stage to generate 3-D proposals leads to inaccurate localization and degraded performance in complex scenarios with substantial background distractors and drastic appearance changes. In this article, we propose a novel cascaded deep Hough voting (C-DHV) algorithm, which employs multistage voting to iteratively refine the 3-D proposals. Specifically, each voting stage refines the geometric locations and features of the 3-D proposals, providing better initialization for the next stage. To improve the discriminative ability of C-DHV, hierarchical features are fully leveraged by a feature transfer module that guides each voting stage, enabling deep-layer features to be fused into the low-level voting stages. In addition, a transformer-based feature clustering module is developed to adaptively aggregate the features of the 3-D proposals delivered by the multistage voting, which promotes the prediction of the most accurate proposal as the final tracking result. Extensive experiments on the challenging KITTI, NuScenes, and Waymo Open Dataset benchmarks show that C-DHV achieves competitive performance compared to state-of-the-art methods and significantly outperforms its one-stage voting counterpart.
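The core idea of the cascade — each voting stage refines proposal centers, and the refined centers seed the next stage — can be illustrated with a toy sketch. Everything here is hypothetical: the `vote_stage` "offset predictor" is a fixed linear step standing in for a learned offset head, not the paper's network.

```python
# Hypothetical sketch of cascaded (multistage) Hough voting: each stage
# predicts an offset that moves proposal centers toward the target center,
# and the refined centers initialize the next stage.

def vote_stage(centers, target, step=0.5):
    """One voting stage: nudge each 3-D proposal center toward the target.

    A stand-in for a learned offset head; `step` mimics how strongly a
    single stage can correct its proposals."""
    return [(c[0] + step * (target[0] - c[0]),
             c[1] + step * (target[1] - c[1]),
             c[2] + step * (target[2] - c[2])) for c in centers]

def cascaded_voting(centers, target, num_stages=3):
    """Run several voting stages; each starts from the previous stage's
    refined proposals, so localization error shrinks stage by stage."""
    for _ in range(num_stages):
        centers = vote_stage(centers, target)
    return centers

proposals = [(4.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
refined = cascaded_voting(proposals, target=(1.0, 1.0, 0.0))
# After three stages, each proposal lies much closer to the target center
# than any single stage could place it.
```

With `step=0.5`, each stage halves the remaining error, so three cascaded stages leave one eighth of the initial offset — the same intuition as iteratively refining proposals rather than committing to one-stage votes.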
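The transformer-based feature clustering module aggregates per-proposal features adaptively rather than averaging them. A minimal attention-style sketch, assuming dot-product scoring against a query vector (the query, feature values, and function names are illustrative, not the paper's implementation):

```python
import math

# Hypothetical sketch of transformer-style feature aggregation: each
# proposal's feature vector is weighted by its similarity to a query,
# so reliable proposals dominate the fused feature.

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def aggregate(features, query):
    """Attention-weighted sum of proposal feature vectors."""
    scores = [sum(f_i * q_i for f_i, q_i in zip(f, query)) for f in features]
    weights = softmax(scores)
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]

# Three proposal features; the first and third align with the query.
feats = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
fused = aggregate(feats, query=[1.0, 0.0])
# Proposals similar to the query receive larger attention weights,
# so they dominate the fused representation.
```

A real transformer block would add learned query/key/value projections and multiple heads, but the adaptive weighting shown here is the mechanism that lets the module favor the most accurate proposal.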
Source journal: IEEE Transactions on Instrumentation and Measurement (Engineering, Electrical & Electronic)
CiteScore: 9.00
Self-citation rate: 23.20%
Articles published per year: 1294
Average review time: 3.9 months
Journal description: Papers are sought that address innovative solutions to the development and use of electrical and electronic instruments and equipment to measure, monitor and/or record physical phenomena for the purpose of advancing measurement science, methods, functionality and applications. The scope of these papers may encompass: (1) theory, methodology, and practice of measurement; (2) design, development and evaluation of instrumentation and measurement systems and components used in generating, acquiring, conditioning and processing signals; (3) analysis, representation, display, and preservation of the information obtained from a set of measurements; and (4) scientific and technical support to establishment and maintenance of technical standards in the field of Instrumentation and Measurement.