Multi-Object Model-Free Tracking with Joint Appearance and Motion Inference

Chongyu Liu, Rui Yao, S. H. Rezatofighi, I. Reid, Javen Qinfeng Shi
DOI: 10.1109/DICTA.2017.8227468
Published in: 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA), November 2017
Citations: 3

Abstract

Multi-object model-free tracking is challenging because the tracker is not aware of the objects' type (it is not allowed to use object detectors) and must distinguish each object from the background as well as from other, similar objects. Most existing methods keep updating their appearance model individually for each target, and their performance is hampered by sudden appearance changes and/or occlusion. We propose to use both an appearance model and a motion model to overcome this issue. We introduce an indicator variable to predict sudden appearance change and occlusion. When either occurs, our model stops updating the appearance model, to avoid parameter updates based on the background or on an incorrect object, and relies more on the motion model to track. Moreover, we consider the correlation among all targets and seek the jointly optimal locations for all targets simultaneously. We formulate the problem of jointly finding the most likely locations as a graphical model inference problem, and we learn the joint parameters of both the appearance model and the motion model online, in the framework of LaRank. Experimental results show that our method outperforms the state-of-the-art.
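The core idea of gating the appearance update with an indicator variable can be illustrated with a minimal sketch. This is not the paper's actual model (which performs joint graphical-model inference over all targets and learns parameters with LaRank); it is a simplified per-target step, assuming a template-matching appearance score (normalized cross-correlation) and a constant-velocity motion prediction. The threshold `sim_threshold`, learning rate `lr`, and function names are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def track_step(template, candidate_patch, app_pos, motion_pos,
               sim_threshold=0.6, lr=0.1):
    """One gated tracking step (sketch, not the paper's exact model).

    template        -- stored appearance template for this target
    candidate_patch -- image patch at the appearance model's best location
    app_pos         -- location proposed by the appearance model
    motion_pos      -- location predicted by the motion model
    """
    score = ncc(template, candidate_patch)
    # Indicator variable: True = appearance trusted,
    # False = sudden appearance change or occlusion suspected.
    indicator = score >= sim_threshold
    if indicator:
        # Trust the appearance model: take its location and update
        # the template online with a small learning rate.
        template = (1 - lr) * template + lr * candidate_patch
        pos = app_pos
    else:
        # Freeze the appearance model (no parameter update from a
        # possibly wrong patch) and rely on the motion model instead.
        pos = motion_pos
    return template, pos, indicator
```

When the candidate patch matches the template, the indicator fires and the template is refreshed; when the patch is dissimilar (e.g. the target is occluded), the template stays frozen and the motion prediction is used, which mirrors the paper's stated behavior at a high level.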