Similarity learning for template-based visual tracking

Xiuzhuang Zhou, Lu Kou, Hui Ding, Xiaoyan Fu, Yuanyuan Shang
{"title":"Similarity learning for template-based visual tracking","authors":"Xiuzhuang Zhou, Lu Kou, Hui Ding, Xiaoyan Fu, Yuanyuan Shang","doi":"10.1109/ICMEW.2014.6890723","DOIUrl":null,"url":null,"abstract":"Most existing discriminative models for visual tracking are often formulated as supervised learning of a binary classification function, whose continuous output is then cast into a specific tracking framework as the confidence of the visual target. We argue that this might be less accurate since the classifier is learned for making binary decision, rather than predicting the similarity score between the candidate image patches and the true target. On the other hand, a generative tracker aims at learning a compact object representation for updating of the visual appearance. This, however, ignores the useful information from background regions surroundding the visual target, and hence might not well separate the visual target from the background distracters. We propose in this work a visual tracking scheme, in which a similarity function is explicitly learned in a generative tracking framework to significantly alleviate the drifting problem suffered by many existing trackers. 
Experimental results on various challenging human sequences, involving significant appearance changes, severe occlusions, and cluttered backgrounds, demonstrate the effectiveness of our approach compared to the state-of-the-art alternatives.","PeriodicalId":178700,"journal":{"name":"2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMEW.2014.6890723","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Most existing discriminative models for visual tracking are formulated as supervised learning of a binary classification function, whose continuous output is then cast into a specific tracking framework as the confidence of the visual target. We argue that this can be less accurate, since the classifier is learned to make binary decisions rather than to predict the similarity score between candidate image patches and the true target. On the other hand, a generative tracker aims to learn a compact object representation for updating the visual appearance. This, however, ignores useful information from the background regions surrounding the visual target, and hence may not separate the visual target well from background distractors. In this work we propose a visual tracking scheme in which a similarity function is explicitly learned within a generative tracking framework to significantly alleviate the drifting problem suffered by many existing trackers. Experimental results on various challenging human sequences, involving significant appearance changes, severe occlusions, and cluttered backgrounds, demonstrate the effectiveness of our approach compared to state-of-the-art alternatives.
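The abstract's key distinction is ranking candidate patches by a similarity score against the target template, rather than thresholding a classifier's confidence. The paper's learned similarity function is not reproduced here; the sketch below instead uses a fixed, hand-crafted similarity (normalized cross-correlation) to rank candidates against a template, purely to illustrate the template-matching scoring step. The function names are illustrative, not from the paper.

```python
import numpy as np

def ncc_similarity(template, patch):
    """Normalized cross-correlation between a template and a candidate patch.

    Returns a score in [-1, 1]; higher means more similar. This is a
    hand-crafted stand-in for the learned similarity function in the paper.
    """
    t = template.astype(np.float64).ravel()
    p = patch.astype(np.float64).ravel()
    t -= t.mean()  # remove mean so the score is invariant to brightness offset
    p -= p.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(p)
    if denom == 0.0:
        return 0.0  # constant patch: similarity is undefined, treat as neutral
    return float(np.dot(t, p) / denom)

def track_step(template, candidates):
    """Score every candidate patch against the template and return the
    index of the best match together with its similarity score."""
    scores = [ncc_similarity(template, c) for c in candidates]
    best = int(np.argmax(scores))
    return best, scores[best]
```

In a full tracker, the candidates would be patches sampled around the previous target location, and the template would itself be updated over time; a learned similarity (as proposed in the paper) replaces the fixed NCC score here.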