Authors: Kong Li; Zhe Dai; Hua Cui; Xuan Wang; Huansheng Song
DOI: 10.1109/TCSVT.2025.3554441
Journal: IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 9, pp. 8707-8722
Publication date: 2025-03-25 (Journal Article)
Link: https://ieeexplore.ieee.org/document/10938727/
VRAR: Video-Radar Automatic Registration Method Based on Trajectory Spatiotemporal Features and Bidirectional Mapping
Automating video and radar spatial registration without sensor layout constraints is crucial for enhancing the flexibility of perception systems. However, this remains challenging due to the lack of effective approaches for constructing and utilizing matching information between heterogeneous sensors. Existing methods rely on human intervention or prior knowledge, making it difficult to achieve true automation. Consequently, establishing a registration model that automatically extracts matching information from heterogeneous sensor data remains a key challenge. To address these issues, we propose a novel Video-Radar Automatic Registration (VRAR) method based on vehicle trajectory spatiotemporal feature encoding and a bidirectional mapping network. We first establish a unified representation for heterogeneous sensor data by encoding spatiotemporal features of vehicle trajectories. Based on this, we automatically extract a large number of high-quality matching points from synchronized trajectory pairs using a frame synchronization strategy. Subsequently, we utilize the proposed Video-Radar Bidirectional Mapping Network to process these matching points. This network learns the bidirectional mapping between the two sensor modalities, extending the alignment from discrete local observation points to the entire observable space. Experimental results demonstrate that the VRAR method exhibits significant performance advantages in various traffic scenarios, verifying its effectiveness and generalizability. This capability of automated and adaptive registration highlights the method’s potential for broader applications in heterogeneous sensor integration.
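The abstract describes learning a bidirectional mapping between the two sensor coordinate spaces from automatically extracted matching points. The paper's actual model is a neural network whose architecture is not given here; as a purely hypothetical illustration of the bidirectional idea, the sketch below fits two least-squares affine maps (radar→video and video→radar) from synthetic matched trajectory points and checks round-trip consistency. All names and the affine assumption are ours, not the paper's.

```python
import numpy as np

def fit_affine(src, dst):
    """Fit dst ~ [src, 1] @ W via least squares; src, dst are (N, 2) arrays."""
    X = np.hstack([src, np.ones((len(src), 1))])  # augment with a bias column
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2) parameter matrix
    return W

def apply_affine(W, pts):
    """Apply a fitted (3, 2) affine map to (N, 2) points."""
    X = np.hstack([pts, np.ones((len(pts), 1))])
    return X @ W

# Synthetic matched points: radar-plane coordinates and their (here, exactly
# affine) video-image counterparts, standing in for the frame-synchronized
# trajectory pairs the method extracts automatically.
rng = np.random.default_rng(0)
radar = rng.uniform(0.0, 100.0, size=(50, 2))
true_W = np.array([[2.0, 0.1], [-0.3, 1.5], [10.0, -5.0]])
video = apply_affine(true_W, radar)

W_r2v = fit_affine(radar, video)  # radar -> video direction
W_v2r = fit_affine(video, radar)  # video -> radar direction

# Round-trip (cycle) consistency: radar -> video -> radar recovers the input,
# i.e., the two directional maps agree with each other.
round_trip = apply_affine(W_v2r, apply_affine(W_r2v, radar))
print(np.allclose(round_trip, radar, atol=1e-6))
```

In the paper, a learned network plays the role of these two maps, extending the alignment beyond the discrete matched points to the whole observable space; the round-trip check above is one simple way such bidirectional consistency can be verified.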
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.