Hanyang Zhuang;Yeqiang Qian;Minghu Wu;Chunxiang Wang;Ming Yang
VR-DataAug: An Efficient Data Augmentation Method for Multicamera Vehicle Tracking
DOI: 10.1109/JSEN.2025.3596067
IEEE Sensors Journal, vol. 25, no. 18, pp. 35426-35437, published 2025-08-11 (Journal Article; JCR Q1, Engineering, Electrical & Electronic; Impact Factor 4.3)
https://ieeexplore.ieee.org/document/11122425/
Citations: 0
Abstract
Multicamera vehicle tracking refers to tracking the same vehicle across multiple cameras at various locations, which aids traffic flow analysis and prediction. However, collecting and labeling extensive multicamera vehicle tracking datasets for cities is challenging due to their spatio-temporal complexity, hindering the development of multicamera vehicle tracking algorithms. Simulations can produce vast, automatically labeled datasets. However, there is a significant domain gap between virtual and real vehicles, affecting style features such as texture and illumination, as well as apparent features such as scale and pose. We introduce VR-DataAug, a data augmentation method that merges virtual and real data with consistent style and apparent features. A Background Modeling With Detection Feedback module creates a clean background and extracts vehicle instances. A Multiattribute Vehicle Apparent Modeling module utilizes a classifier to learn apparent features from various camera viewpoints, preserving scale, position, and orientation information between virtual and real vehicles. A Virtual Vehicle and Real Background Fusion module uses a generative model to ensure texture consistency and merges virtual vehicles into real traffic scenes. Extensive experiments on the CityFlow dataset demonstrate that our approach improves detection performance by 3.4% mAP, enhances the vehicle re-identification model by 3.84%, and boosts multicamera vehicle tracking by increasing the IDF1 metric by 4.25%, highlighting its potential to expand training sets while minimizing domain shift.
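The fusion step described above can be illustrated with a minimal alpha-compositing sketch. This is our own simplification for intuition only: the function name, array shapes, and NumPy-based blending are assumptions, and the paper itself uses a generative model to enforce texture consistency rather than plain compositing. The placement coordinate stands in for the scale/position statistics the apparent-modeling module would learn per camera viewpoint.

```python
import numpy as np

def paste_virtual_vehicle(background, vehicle, mask, top_left):
    """Composite a rendered virtual vehicle crop onto a real background.

    background: (H, W, 3) uint8 real scene (vehicles removed by background modeling)
    vehicle:    (h, w, 3) uint8 rendered virtual vehicle crop
    mask:       (h, w) float alpha in [0, 1], e.g. from the renderer
    top_left:   (row, col) placement; in the paper this would be chosen to
                match learned scale/position/orientation statistics
    """
    out = background.astype(np.float32).copy()
    r, c = top_left
    h, w = mask.shape
    alpha = mask[..., None]  # broadcast alpha over the color channels
    region = out[r:r + h, c:c + w]
    # Linear blend: vehicle pixels where alpha is high, background elsewhere
    out[r:r + h, c:c + w] = alpha * vehicle.astype(np.float32) + (1.0 - alpha) * region
    return out.clip(0, 255).astype(np.uint8)
```

In the full method, this naive blend would be replaced by the generative fusion module so that the pasted vehicle's texture and illumination match the real scene.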
About the journal:
The fields of interest of the IEEE Sensors Journal are the theory, design, fabrication, manufacturing, and applications of devices for sensing and transducing physical, chemical, and biological phenomena, with emphasis on the electronics and physics aspects of sensors and integrated sensor-actuators. The IEEE Sensors Journal deals with the following:
-Sensor Phenomenology, Modelling, and Evaluation
-Sensor Materials, Processing, and Fabrication
-Chemical and Gas Sensors
-Microfluidics and Biosensors
-Optical Sensors
-Physical Sensors: Temperature, Mechanical, Magnetic, and others
-Acoustic and Ultrasonic Sensors
-Sensor Packaging
-Sensor Networks
-Sensor Applications
-Sensor Systems: Signals, Processing, and Interfaces
-Actuators and Sensor Power Systems
-Sensor Signal Processing for high precision and stability (amplification, filtering, linearization, modulation/demodulation) and under harsh conditions (EMC, radiation, humidity, temperature); energy consumption/harvesting
-Sensor Data Processing (soft computing with sensor data, e.g., pattern recognition, machine learning, evolutionary computation; sensor data fusion, processing of wave e.g., electromagnetic and acoustic; and non-wave, e.g., chemical, gravity, particle, thermal, radiative and non-radiative sensor data, detection, estimation and classification based on sensor data)
-Sensors in Industrial Practice