{"title":"PCTrack:资源受限边缘设备上实时视频分析的精确对象跟踪","authors":"Xinyi Zhang;Haoran Xu;Chenyun Yu;Guang Tan","doi":"10.1109/TCSVT.2024.3523204","DOIUrl":null,"url":null,"abstract":"The task of live video analytics relies on real-time object tracking that typically involves computationally expensive deep neural network (DNN) models. In practice, it has become essential to process video data on edge devices deployed near the cameras. However, these edge devices often have very limited computing resources and thus suffer from poor tracking accuracy. Through a measurement study, we identify three major factors contributing to the performance issue: outdated detection results, tracking error accumulation, and ignorance of new objects. We introduce a novel approach, called Predict & Correct based Tracking, or <monospace>PCTrack</monospace>, to systematically address these problems. Our design incorporates three innovative components: 1) a Predictive Detection Propagator that rapidly updates outdated object bounding boxes to match the current frame through a lightweight prediction model; 2) a Frame Difference Corrector that refines the object bounding boxes based on frame difference information; and 3) a New Object Detector that efficiently discovers newly appearing objects during tracking. Experimental results show that our approach achieves remarkable accuracy improvements, ranging from 19.4% to 34.7%, across diverse traffic scenarios, compared to state of the art methods.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 5","pages":"3969-3982"},"PeriodicalIF":8.3000,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PCTrack: Accurate Object Tracking for Live Video Analytics on Resource-Constrained Edge Devices\",\"authors\":\"Xinyi Zhang;Haoran Xu;Chenyun Yu;Guang Tan\",\"doi\":\"10.1109/TCSVT.2024.3523204\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The task of live video analytics relies on real-time object tracking that typically involves computationally expensive deep neural network (DNN) models. In practice, it has become essential to process video data on edge devices deployed near the cameras. However, these edge devices often have very limited computing resources and thus suffer from poor tracking accuracy. Through a measurement study, we identify three major factors contributing to the performance issue: outdated detection results, tracking error accumulation, and ignorance of new objects. We introduce a novel approach, called Predict & Correct based Tracking, or <monospace>PCTrack</monospace>, to systematically address these problems. Our design incorporates three innovative components: 1) a Predictive Detection Propagator that rapidly updates outdated object bounding boxes to match the current frame through a lightweight prediction model; 2) a Frame Difference Corrector that refines the object bounding boxes based on frame difference information; and 3) a New Object Detector that efficiently discovers newly appearing objects during tracking. 
Experimental results show that our approach achieves remarkable accuracy improvements, ranging from 19.4% to 34.7%, across diverse traffic scenarios, compared to state of the art methods.\",\"PeriodicalId\":13082,\"journal\":{\"name\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"volume\":\"35 5\",\"pages\":\"3969-3982\"},\"PeriodicalIF\":8.3000,\"publicationDate\":\"2024-12-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10816419/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10816419/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
PCTrack: Accurate Object Tracking for Live Video Analytics on Resource-Constrained Edge Devices
Live video analytics relies on real-time object tracking, which typically involves computationally expensive deep neural network (DNN) models. In practice, it has become essential to process video data on edge devices deployed near the cameras. However, these edge devices often have very limited computing resources and thus suffer from poor tracking accuracy. Through a measurement study, we identify three major factors contributing to this performance issue: outdated detection results, accumulation of tracking errors, and failure to detect newly appearing objects. We introduce a novel approach, called Predict & Correct based Tracking, or PCTrack, to systematically address these problems. Our design incorporates three innovative components: 1) a Predictive Detection Propagator that rapidly updates outdated object bounding boxes to match the current frame through a lightweight prediction model; 2) a Frame Difference Corrector that refines the object bounding boxes based on frame difference information; and 3) a New Object Detector that efficiently discovers newly appearing objects during tracking. Experimental results show that our approach achieves accuracy improvements ranging from 19.4% to 34.7% across diverse traffic scenarios, compared with state-of-the-art methods.
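The abstract describes the three components only at a high level. As a minimal illustrative sketch, the Python below wires them into a per-frame predict-and-correct loop: it assumes a linear motion model for the Predictive Detection Propagator and a simple thresholded frame difference for the Corrector, and reduces the New Object Detector to a comment. All names (`propagate`, `correct`, `track`, `dnn_detect`) and constants (`DETECT_INTERVAL`, `DIFF_THRESH`) are hypothetical stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of a predict-and-correct tracking loop in the spirit of
# PCTrack. Every concrete choice here (linear motion model, threshold,
# detection interval, function names) is an illustrative assumption.
import numpy as np

DETECT_INTERVAL = 10  # assumed number of frames between heavy DNN detections
DIFF_THRESH = 25      # assumed per-pixel intensity-change threshold

def propagate(box, velocity):
    # Predictive Detection Propagator (assumed linear motion model):
    # shift the stale box by its estimated per-frame velocity.
    x, y, w, h = box
    vx, vy = velocity
    return (x + vx, y + vy, w, h)

def correct(box, prev_gray, cur_gray):
    # Frame Difference Corrector (assumed variant): tighten the predicted
    # box to the bounding rectangle of changed pixels inside it.
    x, y, w, h = (int(round(v)) for v in box)
    img_h, img_w = cur_gray.shape
    x, y = max(x, 0), max(y, 0)
    w, h = min(w, img_w - x), min(h, img_h - y)
    if w <= 0 or h <= 0:
        return box
    diff = np.abs(cur_gray[y:y+h, x:x+w].astype(np.int16) -
                  prev_gray[y:y+h, x:x+w].astype(np.int16))
    ys, xs = np.nonzero(diff > DIFF_THRESH)
    if xs.size == 0:  # no motion evidence inside the box: keep the prediction
        return box
    return (x + xs.min(), y + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)

def track(frames, dnn_detect):
    """Toy driver: run `dnn_detect` (a stand-in for the expensive DNN model)
    every DETECT_INTERVAL frames; predict and correct on frames in between."""
    boxes, vels = [], []
    prev_gray = None
    for i, frame in enumerate(frames):
        gray = frame.mean(axis=2).astype(np.uint8)  # cheap grayscale proxy
        if i % DETECT_INTERVAL == 0:
            boxes = list(dnn_detect(frame))    # authoritative but infrequent
            vels = [(0.0, 0.0)] * len(boxes)   # velocities re-learned below
        elif prev_gray is not None:
            predicted = [propagate(b, v) for b, v in zip(boxes, vels)]
            corrected = [correct(b, prev_gray, gray) for b in predicted]
            # Re-estimate per-object velocity from the observed displacement.
            vels = [(cb[0] - ob[0], cb[1] - ob[1])
                    for cb, ob in zip(corrected, boxes)]
            boxes = corrected
            # PCTrack's New Object Detector would additionally scan motion
            # regions outside the current boxes for new arrivals; omitted here.
        prev_gray = gray
        yield i, boxes
```

A real deployment would also need detector-to-track association and handling of detection latency (the paper's "outdated detection results"); the sketch only conveys the division of labor among the three components.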
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.