DenseTracker: A multi-task dense network for visual tracking

Fei Zhao, Ming Tang, Yi Wu, Jinqiao Wang
Published in: 2017 IEEE International Conference on Multimedia and Expo (ICME), July 2017
DOI: 10.1109/ICME.2017.8019506
Citations: 4

Abstract

How to track an arbitrary object in video is one of the main challenges in computer vision, and it has been studied for decades. Traditional trackers based on hand-crafted features show poor discriminability under complex changes of object appearance. Recently, trackers based on convolutional neural networks (CNNs) have shown promising results by exploiting rich convolutional features. In this paper, we propose a novel DenseTracker based on a multi-task dense convolutional network. To learn a more compact and discriminative representation, we adopt a dense block structure to aggregate features from different layers. A multi-task loss is then designed to accurately predict the object's position and scale by jointly learning box regression and pair-wise similarity. Furthermore, DenseTracker is trained end-to-end on large-scale datasets, including ImageNet Video (VID) and ALOV300++. DenseTracker runs at 25 fps on a GPU and achieves state-of-the-art performance on two public benchmarks, OTB50 and VOT2016.
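The abstract describes a multi-task objective that couples bounding-box regression with a pair-wise similarity term, but does not give the formulation. The sketch below is a minimal NumPy illustration under common assumptions, not the paper's actual loss: smooth L1 for the box branch, binary cross-entropy on a match score for the similarity branch, and a hypothetical balancing weight `alpha`.

```python
import numpy as np

def smooth_l1(pred, target):
    """Smooth L1 (Huber) loss, a common choice for bounding-box regression."""
    d = np.abs(pred - target)
    return np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()

def pairwise_similarity_loss(score, label):
    """Binary cross-entropy on a pairwise match score in (0, 1).

    label = 1 if the candidate patch matches the target, else 0.
    """
    eps = 1e-12  # guard against log(0)
    return -(label * np.log(score + eps) + (1 - label) * np.log(1 - score + eps))

def multitask_loss(box_pred, box_gt, sim_score, sim_label, alpha=1.0):
    """Joint loss: box regression plus alpha-weighted pair-wise similarity."""
    return smooth_l1(box_pred, box_gt) + alpha * pairwise_similarity_loss(sim_score, sim_label)

# A perfect box and a confident match yield a small loss; a poor box and a
# wrong match yield a large one.
box = np.array([0.5, 0.5, 0.2, 0.2])  # (cx, cy, w, h), normalized
good = multitask_loss(box, box, sim_score=0.9, sim_label=1)
bad = multitask_loss(np.array([1.0, 1.0, 0.5, 0.5]), np.array([0.0, 0.0, 0.2, 0.2]),
                     sim_score=0.1, sim_label=1)
```

Joint training of both branches is what lets a single forward pass supply both a localization estimate (position and scale) and a confidence that the candidate is the tracked object.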