Introducing motion information in dense feature classifiers

Claudiu Tanase, B. Mérialdo
{"title":"Introducing motion information in dense feature classifiers","authors":"Claudiu Tanase, B. Mérialdo","doi":"10.1109/WIAMIS.2013.6616132","DOIUrl":null,"url":null,"abstract":"Semantic concept detection in large scale video collections is mostly achieved through a static analysis of selected keyframes. A popular choice for representing the visual content of an image is based on the pooling of local descriptors such as Dense SIFT. However, simple motion features such as optic flow can be extracted relatively easy from such keyframes. In this paper we propose an efficient addition to the DSIFT approach by including information derived from optic flow. Based on optic flow magnitude, we can estimate for each DSIFT patch whether it is static or moving. We modify the bag of words model used traditionally with DSIFT by creating two separate occurrence histograms instead of one: one for static patches and one for dynamic patches. We further refine this method by studying different separation thresholds and soft assign-ment, as well as different normalization techniques. Classifier score fusion is used to maximize the average precision of all these variants. 
Experimental results on the TRECVID Semantic Indexing collection show that by means of classifier fusion our method increases overall mean average precision of the DSIFT classifier from 0.061 to 0.106.","PeriodicalId":408077,"journal":{"name":"2013 14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS)","volume":"62 11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 14th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WIAMIS.2013.6616132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Semantic concept detection in large-scale video collections is mostly achieved through static analysis of selected keyframes. A popular choice for representing the visual content of an image is the pooling of local descriptors such as Dense SIFT. However, simple motion features such as optic flow can be extracted relatively easily from such keyframes. In this paper we propose an efficient addition to the DSIFT approach that incorporates information derived from optic flow. Based on optic flow magnitude, we can estimate for each DSIFT patch whether it is static or moving. We modify the bag-of-words model traditionally used with DSIFT by creating two separate occurrence histograms instead of one: one for static patches and one for dynamic patches. We further refine this method by studying different separation thresholds and soft assignment, as well as different normalization techniques. Classifier score fusion is used to maximize the average precision across all these variants. Experimental results on the TRECVID Semantic Indexing collection show that, by means of classifier fusion, our method increases the overall mean average precision of the DSIFT classifier from 0.061 to 0.106.
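The core idea of the split-histogram bag-of-words model can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the hard (rather than soft) assignment of patches, the single fixed threshold, and the L1 normalization are all assumptions chosen for clarity; the paper compares several thresholds and normalization schemes.

```python
import numpy as np

def split_bow_histograms(word_ids, flow_magnitudes, vocab_size, threshold=1.0):
    """Build separate BoW occurrence histograms for static and dynamic patches.

    word_ids:        visual-word index assigned to each DSIFT patch
                     (hard assignment; the paper also studies soft assignment)
    flow_magnitudes: per-patch optic-flow magnitude estimate
    threshold:       flow magnitude above which a patch counts as moving
                     (illustrative value; the paper evaluates several thresholds)
    """
    word_ids = np.asarray(word_ids)
    flow_magnitudes = np.asarray(flow_magnitudes)

    # Classify each patch as static or dynamic from its flow magnitude.
    moving = flow_magnitudes > threshold

    # One occurrence histogram per patch class, over the same vocabulary.
    h_static = np.bincount(word_ids[~moving], minlength=vocab_size).astype(float)
    h_dynamic = np.bincount(word_ids[moving], minlength=vocab_size).astype(float)

    # L1-normalize each histogram (one of several normalization choices).
    for h in (h_static, h_dynamic):
        total = h.sum()
        if total > 0:
            h /= total

    return h_static, h_dynamic
```

The two histograms would then be fed to classifiers whose scores are fused, rather than concatenated into the single histogram used by the standard DSIFT pipeline.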