Fast Visual Tracking Using Spatial Temporal Background Context Learning

Asif Mukhtar, Arslan Majid, Kashif Fahim
{"title":"Fast Visual Tracking Using Spatial Temporal Background Context Learning","authors":"Asif Mukhtar, Arslan Majid, Kashif Fahim","doi":"10.24203/IJCIT.V9I5.25","DOIUrl":null,"url":null,"abstract":"Visual Tracking by now has gained much provenience among researchers in recent years due to its vast variety of applications that occur in daily life. Various applications of visual tracking include counting of cars on a high way, analyzing the crowd intensity in a concert or a football ground or a surveillance camera tracking a single person to track its movements. Various techniques have been proposed and implemented in this research domain where researchers have analyzed various parameters. Still this area has a lot to offer. There are two common approaches that are currently deployed in visual tracking. One is discriminative tracking and the other one is generative tracking. Discriminative tracking requires a pre-trained model that requires the learning of the data and solves the object recognition as a binary classification problem. On the other hand, generative model in tracking makes use of the previous states so that next state can be predicted. In this paper, a novel tacking based on generative tracking method is proposed called as Illumination Inavariant Spatio Temporal Tracker (IISTC). The proposed technique takes into account of the nearby surrounding regions and performs context learning so that the state of the object under consideration and its surrounding regions can be estimated in the next frame. The learning model is deployed both in the spatial domain as well as the temporal domain. Spatial domain part of the tracker takes into consideration the nearby pixels in a frame while the temporal model takes account of the possible change of object location. The proposed tracker was tested on a set of 50 images against other state of the art four trackers. Experimental results reveal that our proposed tracker performs reasonably well as compared with other trackers. The proposed visual tracker is both efficiently with respect to computation power as well as accuracy. The proposed tracker takes only 4 fast Fourier transform computations thus making it reasonably faster. The proposed trackers perform exceptionally well when there is a sudden change in back ground illumination.","PeriodicalId":359510,"journal":{"name":"International Journal of Computer and Information Technology(2279-0764)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer and Information Technology(2279-0764)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.24203/IJCIT.V9I5.25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Visual tracking has gained much prominence among researchers in recent years owing to its wide variety of everyday applications, such as counting cars on a highway, analyzing crowd density at a concert or a football ground, or following a single person's movements with a surveillance camera. Numerous techniques have been proposed and implemented in this research domain, with researchers analyzing various parameters, yet the area still has much to offer. Two approaches are commonly deployed in visual tracking: discriminative tracking and generative tracking. Discriminative tracking requires a pre-trained model learned from data and treats object recognition as a binary classification problem. A generative tracking model, on the other hand, uses previous states to predict the next state. In this paper, a novel generative tracking method called the Illumination Invariant Spatio-Temporal Tracker (IISTC) is proposed. The proposed technique takes the nearby surrounding regions into account and performs context learning so that the state of the object under consideration and of its surrounding regions can be estimated in the next frame. The learning model operates in both the spatial and temporal domains: the spatial part considers nearby pixels within a frame, while the temporal part accounts for possible changes in the object's location. The proposed tracker was tested on a set of 50 images against four other state-of-the-art trackers. Experimental results reveal that it performs well compared with the other trackers, being efficient in terms of both computational cost and accuracy. It requires only four fast Fourier transform computations per frame, making it reasonably fast, and it performs exceptionally well when there is a sudden change in background illumination.
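The abstract does not give the model's equations, but the four-FFT budget and the spatial/temporal split suggest a formulation in the spirit of spatio-temporal context learning, where a context model is solved for in the frequency domain each frame and blended over time. The sketch below is a minimal illustration under that assumption only; the class name `ContextTracker`, the helper `gaussian_window`, and all parameter values (`sigma`, `rho`, `eps`) are hypothetical and not taken from the paper.

```python
import numpy as np

def gaussian_window(shape, sigma):
    """2-D Gaussian centred on the patch, used as the spatial weight."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

class ContextTracker:
    """Minimal sketch of an FFT-based spatio-temporal context tracker.

    `patch` is a grayscale context region (2-D float array) cropped
    around the target; all parameter values here are assumptions.
    """

    def __init__(self, patch, sigma=15.0, rho=0.075, eps=1e-6):
        self.weight = gaussian_window(patch.shape, sigma)   # spatial context prior
        # Ideal confidence map: a sharp peak at the target centre (cached FFT).
        self.target = np.fft.fft2(gaussian_window(patch.shape, sigma / 4.0))
        self.rho = rho                                      # temporal learning rate
        self.eps = eps
        self.model = self._spatial_model(patch)             # model kept in frequency domain

    def _spatial_model(self, patch):
        # Solve conf = ifft2(model * fft2(weighted patch)) for `model`
        # by element-wise division in the frequency domain.
        return self.target / (np.fft.fft2(patch * self.weight) + self.eps)

    def track(self, patch):
        """Return the confidence-map peak for a new frame's context patch."""
        ctx = np.fft.fft2(patch * self.weight)              # FFT of the new context
        conf = np.real(np.fft.ifft2(self.model * ctx))      # inverse FFT -> confidence map
        dy, dx = np.unravel_index(np.argmax(conf), conf.shape)
        # Temporal context learning: exponentially blend in this frame's
        # spatial model. (The full method would re-crop the context around
        # the new peak before this update, adding FFTs; the abstract's count
        # of four transforms per frame presumably includes that step.)
        spatial = self.target / (ctx + self.eps)
        self.model = (1.0 - self.rho) * self.model + self.rho * spatial
        return dy, dx
```

In use, one would crop a grayscale context patch around the target in every frame, call `track` to obtain the confidence peak, and shift the target box by the peak's offset from the patch centre; the exponential blend with rate `rho` is what carries the temporal half of the model, while the Gaussian weighting of nearby pixels carries the spatial half.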