{"title":"利用自适应时空加权正则化实现无人机的鲁棒视觉跟踪","authors":"Zhi Chen, Lijun Liu, Zhen Yu","doi":"10.1007/s00371-024-03290-w","DOIUrl":null,"url":null,"abstract":"<p>The unmanned aerial vehicles (UAV) visual object tracking method based on the discriminative correlation filter (DCF) has gained extensive research and attention due to its superior computation and extraordinary progress, but is always suffers from unnecessary boundary effects. To solve the aforementioned problems, a spatial-temporal regularization correlation filter framework is proposed, which is achieved by introducing a constant regularization term to penalize the coefficients of the DCF filter. The tracker can substantially improve the tracking performance but increase computational complexity. However, these kinds of methods make the object fail to adapt to specific appearance variations, and we need to pay much effort in fine-tuning the spatial-temporal regularization weight coefficients. In this work, an adaptive spatial-temporal weighted regularization (ASTWR) model is proposed. An ASTWR module is introduced to obtain the weighted spatial-temporal regularization coefficients automatically. The proposed ASTWR model can deal effectively with complex situations and substantially improve the credibility of tracking results. In addition, an adaptive spatial-temporal constraint adjusting mechanism is proposed. By repressing the drastic appearance changes between adjacent frames, the tracker enables smooth filter learning in the detection phase. Substantial experiments show that the proposed tracker performs favorably against homogeneous UAV-based and DCF-based trackers. Moreover, the ASTWR tracker reaches over 35 FPS on a single CPU platform, and gains an AUC score of 57.9% and 49.7% on the UAV123 and VisDrone2020 datasets, respectively.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Toward robust visual tracking for UAV with adaptive spatial-temporal weighted regularization\",\"authors\":\"Zhi Chen, Lijun Liu, Zhen Yu\",\"doi\":\"10.1007/s00371-024-03290-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The unmanned aerial vehicles (UAV) visual object tracking method based on the discriminative correlation filter (DCF) has gained extensive research and attention due to its superior computation and extraordinary progress, but is always suffers from unnecessary boundary effects. To solve the aforementioned problems, a spatial-temporal regularization correlation filter framework is proposed, which is achieved by introducing a constant regularization term to penalize the coefficients of the DCF filter. The tracker can substantially improve the tracking performance but increase computational complexity. However, these kinds of methods make the object fail to adapt to specific appearance variations, and we need to pay much effort in fine-tuning the spatial-temporal regularization weight coefficients. In this work, an adaptive spatial-temporal weighted regularization (ASTWR) model is proposed. An ASTWR module is introduced to obtain the weighted spatial-temporal regularization coefficients automatically. The proposed ASTWR model can deal effectively with complex situations and substantially improve the credibility of tracking results. 
In addition, an adaptive spatial-temporal constraint adjusting mechanism is proposed. By repressing the drastic appearance changes between adjacent frames, the tracker enables smooth filter learning in the detection phase. Substantial experiments show that the proposed tracker performs favorably against homogeneous UAV-based and DCF-based trackers. Moreover, the ASTWR tracker reaches over 35 FPS on a single CPU platform, and gains an AUC score of 57.9% and 49.7% on the UAV123 and VisDrone2020 datasets, respectively.</p>\",\"PeriodicalId\":501186,\"journal\":{\"name\":\"The Visual Computer\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Visual Computer\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s00371-024-03290-w\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03290-w","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Toward robust visual tracking for UAV with adaptive spatial-temporal weighted regularization
Unmanned aerial vehicle (UAV) visual object tracking methods based on the discriminative correlation filter (DCF) have attracted extensive research attention owing to their computational efficiency and rapid progress, but they always suffer from undesirable boundary effects. To address this problem, spatial-temporal regularized correlation filter frameworks introduce a constant regularization term that penalizes the coefficients of the DCF filter. Such trackers can substantially improve tracking performance, but at the cost of higher computational complexity. Moreover, with fixed regularization these methods fail to adapt to specific appearance variations, and considerable effort is required to fine-tune the spatial-temporal regularization weight coefficients. In this work, an adaptive spatial-temporal weighted regularization (ASTWR) model is proposed. An ASTWR module is introduced to obtain the weighted spatial-temporal regularization coefficients automatically. The proposed ASTWR model deals effectively with complex situations and substantially improves the credibility of tracking results. In addition, an adaptive spatial-temporal constraint adjusting mechanism is proposed: by suppressing drastic appearance changes between adjacent frames, it enables smooth filter learning in the detection phase. Extensive experiments show that the proposed tracker performs favorably against other UAV-based and DCF-based trackers. Moreover, the ASTWR tracker runs at over 35 FPS on a single CPU platform and achieves AUC scores of 57.9% and 49.7% on the UAV123 and VisDrone2020 datasets, respectively.
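For context, the constant-regularization framework criticized in the abstract matches the general form of spatial-temporal regularized correlation filters. The block below is a minimal sketch of that standard (STRCF-style) objective, not the paper's exact formulation; the spatial weight map w and temporal weight mu_t are illustrative assumptions.

```latex
% Sketch of a spatial-temporal regularized DCF objective (assumed STRCF-style form,
% not necessarily the paper's exact ASTWR model).
% x_t^d : d-th feature channel of frame t     f_t^d : corresponding filter channel
% y     : desired Gaussian response           w     : spatial regularization weight map
% mu_t  : temporal regularization weight
% \ast  : circular correlation                \odot : element-wise multiplication
\begin{equation}
\mathcal{E}(\mathbf{f}_t)=
\frac{1}{2}\Big\lVert \mathbf{y}-\sum_{d=1}^{D}\mathbf{x}_t^{d}\ast\mathbf{f}_t^{d}\Big\rVert_2^{2}
+\frac{1}{2}\sum_{d=1}^{D}\big\lVert \mathbf{w}\odot\mathbf{f}_t^{d}\big\rVert_2^{2}
+\frac{\mu_t}{2}\big\lVert \mathbf{f}_t-\mathbf{f}_{t-1}\big\rVert_2^{2}
\end{equation}
```

With fixed w and mu this reduces to the constant-regularization scheme whose weights must be hand-tuned; the ASTWR idea, as described in the abstract, is to obtain such spatial-temporal weights adaptively per frame instead.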