Action Recognition with Uncertain VLAD

Xianzhong Wang, Hongtao Lu
DOI: 10.1109/ISCID.2014.238
Published in: 2014 Seventh International Symposium on Computational Intelligence and Design
Publication date: 2014-12-13
Citations: 1

Abstract

Recognizing human actions in video has gradually attracted much attention in the computer vision community; however, it also faces many realistic challenges caused by background clutter, viewpoint changes, and variation in actor appearance. These challenges reflect the difficulty of obtaining a clean and discriminative video representation for classification. Recently, VLAD (Vector of Locally Aggregated Descriptors) has been shown to be a simple and efficient encoding scheme for obtaining discriminative video representations. However, VLAD uses only the nearest visual word in the codebook to aggregate each descriptor, whether or not that word is appropriate. Inspired by visual word ambiguity and salience encoding in image classification, we propose the Uncertain VLAD (UVLAD) encoding scheme, which aggregates each local descriptor by considering multiple nearest visual words. The proposed UVLAD scheme ensures that each descriptor is aggregated or discarded appropriately. We evaluate our method on two benchmark datasets, KTH and YouTube. Experimental results show that our encoding scheme outperforms state-of-the-art methods in most cases.
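The contrast the abstract draws can be made concrete in code. Below is a minimal NumPy sketch: standard VLAD assigns each local descriptor to its single nearest visual word and accumulates the residual, while an "uncertain" variant spreads each descriptor over its m nearest words with soft weights. The Gaussian weighting, the parameters `m` and `sigma`, and the function names are illustrative assumptions; the paper's exact weighting and any discard rule may differ.

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """Standard VLAD: assign each descriptor to its single nearest
    visual word and accumulate the residual (descriptor - word)."""
    K, d = codebook.shape
    agg = np.zeros((K, d))
    # distance of every descriptor to every visual word
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    for x, k in zip(descriptors, nearest):
        agg[k] += x - codebook[k]
    v = agg.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))  # power normalization
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def uvlad_encode(descriptors, codebook, m=3, sigma=1.0):
    """Hypothetical sketch of the uncertainty idea: aggregate each
    descriptor over its m nearest visual words, weighted by a Gaussian
    kernel on the distance (the paper's exact scheme may differ)."""
    K, d = codebook.shape
    agg = np.zeros((K, d))
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    for i, x in enumerate(descriptors):
        idx = np.argsort(dists[i])[:m]                 # m nearest visual words
        w = np.exp(-dists[i, idx] ** 2 / (2 * sigma ** 2))
        w /= w.sum()                                   # normalized soft weights
        for k, wk in zip(idx, w):
            agg[k] += wk * (x - codebook[k])
    v = agg.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))  # power normalization
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

Both encoders return a K·d-dimensional, L2-normalized vector, so a UVLAD representation can be dropped into the same linear classifier pipeline as plain VLAD; only the assignment step changes.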