Visual tracking based on object appearance and structure preserved local patches matching
Wei Wang, Kun Duan, Tai-Peng Tian, Ting Yu, Ser-Nam Lim, H. Qi
2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), August 2016. DOI: 10.1109/AVSS.2016.7738065
Citations: 0
Abstract
Drift is the most difficult issue in visual object tracking under the "tracking-by-detection" framework. Because of self-taught learning, misaligned samples can be incorporated into training and degrade the tracker's discriminative power. This paper proposes a new tracking approach that resolves this problem with three collaborating multi-level components: a high-level global appearance tracker provides a coarse prediction, on top of which structure-preserved low-level local patch matching helps to guarantee precise tracking with minimal drift. The local patches are deliberately placed on the foreground object via foreground/background segmentation, which is realized by a simple and efficient classifier trained on super-pixel segments. Experimental results show that the three closely collaborating components enable our tracker to run in real time and perform favourably against state-of-the-art approaches on challenging benchmark sequences.
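The three-component pipeline the abstract describes can be sketched roughly as follows. This is a hypothetical toy illustration, not the authors' implementation: the function names, the constant-velocity "global tracker", the patch classifier, and the averaging-based refinement are all placeholder assumptions standing in for the paper's actual appearance model, super-pixel classifier, and structure-preserving matching.

```python
# Hypothetical sketch of the paper's three collaborating components:
# (1) a high-level global appearance tracker gives a coarse box,
# (2) a foreground/background classifier selects patches on the object,
# (3) low-level local patch matching refines the coarse prediction.
# All names and the toy data are illustrative, not from the paper.

def global_appearance_predict(prev_box, motion):
    """Component 1: coarse box prediction (toy: constant-velocity shift)."""
    x, y, w, h = prev_box
    dx, dy = motion
    return (x + dx, y + dy, w, h)

def select_foreground_patches(patch_positions, fg_classifier):
    """Component 2: keep only patches the classifier labels as foreground,
    mimicking deployment of patches on the segmented object."""
    return [p for p in patch_positions if fg_classifier(p)]

def refine_with_patch_matches(coarse_box, fg_patches, matched_offsets):
    """Component 3: refine the coarse box from the foreground patches'
    matched displacements (toy: average offset as the correction)."""
    if not fg_patches:
        return coarse_box  # no reliable patches: fall back to coarse box
    ox = sum(matched_offsets[p][0] for p in fg_patches) / len(fg_patches)
    oy = sum(matched_offsets[p][1] for p in fg_patches) / len(fg_patches)
    x, y, w, h = coarse_box
    return (x + ox, y + oy, w, h)

# Toy run of one tracking step.
coarse = global_appearance_predict((10, 10, 20, 20), motion=(2, 0))
patches = select_foreground_patches(
    [(0, 0), (1, 1), (5, 5)],
    fg_classifier=lambda p: p[0] < 5,  # stand-in for the super-pixel classifier
)
refined = refine_with_patch_matches(
    coarse, patches, matched_offsets={(0, 0): (1, 0), (1, 1): (1, 2)}
)
```

In this toy step the coarse prediction (12, 10, 20, 20) is corrected by the mean offset (1.0, 1.0) of the two surviving foreground patches, giving (13.0, 11.0, 20, 20); the real method replaces each placeholder with the learned appearance model, the super-pixel-trained segmenter, and structure-constrained patch matching.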