Robust object tracking via multi-task dynamic sparse model
Zhangjian Ji, Weiqiang Wang
2014 IEEE International Conference on Image Processing (ICIP), October 2014
DOI: 10.1109/ICIP.2014.7025078
Citations: 7
Abstract
Recently, sparse representation has been widely applied in generative tracking methods, which learn the representation of each particle independently and do not consider the correlation between particle representations in the time domain. In this paper, we formulate object tracking in a particle filter framework as a multi-task dynamic sparse learning problem, which we denote Multi-Task Dynamic Sparse Tracking (MTDST). By exploiting the popular sparsity-inducing ℓ1,2 mixed norm, we regularize the representation problem to enforce joint sparsity and learn the particle representations together. We also introduce an innovation sparse term into the tracking model. Compared with previous methods, our method mines the interdependencies between particles and the correlation of particle representations in the time domain, which improves tracking performance. In addition, because the loft least square is robust to outliers, we adopt it in place of the ordinary least square to compute the likelihood probability. In the updating scheme, we eliminate the influence of occluded pixels when updating the templates. Comprehensive experiments on several challenging image sequences demonstrate that the proposed method consistently outperforms existing state-of-the-art methods.
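As a concrete illustration (not the authors' code), the joint-sparsity penalty the abstract refers to can be sketched as the ℓ1,2 mixed norm of a coefficient matrix. Here `C` is a hypothetical coefficient matrix whose columns hold the sparse codes of individual particles over a shared template dictionary; penalizing the ℓ1,2 norm drives entire rows of `C` to zero, so all particles jointly select the same few templates:

```python
import numpy as np

def l12_mixed_norm(C):
    """Compute ||C||_{1,2}: the sum over rows of each row's l2 norm.

    Minimizing a loss plus this penalty encourages row-wise (joint)
    sparsity, i.e. all columns (particle representations) activate
    the same small subset of rows (templates).
    """
    return np.sum(np.linalg.norm(C, axis=1))

# Toy coefficient matrix: 3 templates (rows) x 3 particles (columns),
# with the middle template unused by every particle (a zero row).
C = np.array([[1.0, 2.0, 2.0],
              [0.0, 0.0, 0.0],
              [3.0, 0.0, 4.0]])
print(l12_mixed_norm(C))  # row norms 3.0 + 0.0 + 5.0 -> 8.0
```

Zero rows contribute nothing to the penalty, which is why the ℓ1,2 norm, unlike an elementwise ℓ1 norm applied per particle, couples the particles' representations together.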