VRAR: Video-Radar Automatic Registration Method Based on Trajectory Spatiotemporal Features and Bidirectional Mapping

IF 11.1 · CAS Tier 1 (Engineering & Technology) · JCR Q1, ENGINEERING, ELECTRICAL & ELECTRONIC
Kong Li;Zhe Dai;Hua Cui;Xuan Wang;Huansheng Song
{"title":"VRAR: Video-Radar Automatic Registration Method Based on Trajectory Spatiotemporal Features and Bidirectional Mapping","authors":"Kong Li;Zhe Dai;Hua Cui;Xuan Wang;Huansheng Song","doi":"10.1109/TCSVT.2025.3554441","DOIUrl":null,"url":null,"abstract":"Automating video and radar spatial registration without sensor layout constraints is crucial for enhancing the flexibility of perception systems. However, this remains challenging due to the lack of effective approaches for constructing and utilizing matching information between heterogeneous sensors. Existing methods rely on human intervention or prior knowledge, making it difficult to achieve true automation. Consequently, establishing a registration model that automatically extracts matching information from heterogeneous sensor data remains a key challenge. To address these issues, we propose a novel Video-Radar Automatic Registration (VRAR) method based on vehicle trajectory spatiotemporal feature encoding and a bidirectional mapping network. We first establish a unified representation for heterogeneous sensor data by encoding spatiotemporal features of vehicle trajectories. Based on this, we automatically extract a large number of high-quality matching points from synchronized trajectory pairs using a frame synchronization strategy. Subsequently, we utilize the proposed Video-Radar Bidirectional Mapping Network to process these matching points. This network learns the bidirectional mapping between the two sensor modalities, extending the alignment from discrete local observation points to the entire observable space. Experimental results demonstrate that the VRAR method exhibits significant performance advantages in various traffic scenarios, verifying its effectiveness and generalizability. This capability of automated and adaptive registration highlights the method’s potential for broader applications in heterogeneous sensor integration.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 9","pages":"8707-8722"},"PeriodicalIF":11.1000,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10938727/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Automating video and radar spatial registration without sensor layout constraints is crucial for enhancing the flexibility of perception systems. However, this remains challenging due to the lack of effective approaches for constructing and utilizing matching information between heterogeneous sensors. Existing methods rely on human intervention or prior knowledge, making it difficult to achieve true automation. Consequently, establishing a registration model that automatically extracts matching information from heterogeneous sensor data remains a key challenge. To address these issues, we propose a novel Video-Radar Automatic Registration (VRAR) method based on vehicle trajectory spatiotemporal feature encoding and a bidirectional mapping network. We first establish a unified representation for heterogeneous sensor data by encoding spatiotemporal features of vehicle trajectories. Based on this, we automatically extract a large number of high-quality matching points from synchronized trajectory pairs using a frame synchronization strategy. Subsequently, we utilize the proposed Video-Radar Bidirectional Mapping Network to process these matching points. This network learns the bidirectional mapping between the two sensor modalities, extending the alignment from discrete local observation points to the entire observable space. Experimental results demonstrate that the VRAR method exhibits significant performance advantages in various traffic scenarios, verifying its effectiveness and generalizability. This capability of automated and adaptive registration highlights the method’s potential for broader applications in heterogeneous sensor integration.
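The abstract outlines a pipeline rather than an implementation, so the following is a minimal, hypothetical sketch of its two core ideas: pairing time-synchronized video and radar trajectory samples into matching points, and fitting a pair of small networks that map radar ground-plane coordinates to pixel coordinates and back. Everything here (function names, network sizes, the cycle-consistency term, PyTorch as the framework) is an illustrative assumption, not the authors' VRAR implementation.

```python
# Hypothetical sketch of the two ideas described in the abstract:
# (1) pairing time-synchronized video/radar trajectory points into
#     matching points, and
# (2) a pair of small MLPs that learn the bidirectional mapping
#     between pixel coordinates and radar ground-plane coordinates.
# All names, sizes, and the loss design are illustrative assumptions.
import torch
import torch.nn as nn


def pair_matching_points(video_traj, radar_traj, tol=0.02):
    """Pair (t, u, v) video samples with (t, x, y) radar samples whose
    timestamps agree within `tol` seconds -- a simple stand-in for the
    paper's frame synchronization strategy. Both inputs are assumed
    sorted by timestamp."""
    if not radar_traj:
        return []
    pairs, j = [], 0
    for t, u, v in video_traj:
        while j + 1 < len(radar_traj) and radar_traj[j + 1][0] <= t:
            j += 1
        if abs(radar_traj[j][0] - t) <= tol:
            pairs.append(((u, v), radar_traj[j][1:]))
    return pairs


def mlp():
    # 2-D -> 2-D coordinate regressor; depth and width are arbitrary.
    return nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 2))


def train_bidirectional(pairs, epochs=500, lam=0.1):
    """Fit f: radar -> pixel and g: pixel -> radar on the matching
    points, with cycle-consistency terms tying the two directions
    together (an assumed objective, not confirmed by the paper)."""
    px = torch.tensor([p for p, _ in pairs], dtype=torch.float32)
    rd = torch.tensor([r for _, r in pairs], dtype=torch.float32)
    f, g = mlp(), mlp()
    opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()),
                           lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = (nn.functional.mse_loss(f(rd), px)              # radar -> pixel
                + nn.functional.mse_loss(g(px), rd)            # pixel -> radar
                + lam * nn.functional.mse_loss(g(f(rd)), rd)   # cycle: r->p->r
                + lam * nn.functional.mse_loss(f(g(px)), px))  # cycle: p->r->p
        loss.backward()
        opt.step()
    return f, g
```

In practice the learned mappings would be validated on held-out trajectory points; the cycle terms merely encourage the two directions to remain mutually consistent across the whole observable space rather than only at the sampled matching points.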
Source journal
CiteScore: 13.80
Self-citation rate: 27.40%
Annual article output: 660
Review time: 5 months
About the journal: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.