Recognizing and Localizing Individual Activities through Graph Matching

Anh-Phuong Ta, Christian Wolf, G. Lavoué, A. Baskurt
{"title":"基于图匹配的个体活动识别与定位","authors":"Anh-Phuong Ta, Christian Wolf, G. Lavoué, A. Baskurt","doi":"10.1109/AVSS.2010.81","DOIUrl":null,"url":null,"abstract":"In this paper we tackle the problem of detecting individualhuman actions in video sequences. While the most successfulmethods are based on local features, which proved thatthey can deal with changes in background, scale and illumination,most existing methods have two main shortcomings:first, they are mainly based on the individual power ofspatio-temporal interest points (STIP), and therefore ignorethe spatio-temporal relationships between them. Second,these methods mainly focus on direct classification techniquesto classify the human activities, as opposed to detectionand localization. In order to overcome these limitations,we propose a new approach, which is based on agraph matching algorithm for activity recognition. In contrastto most previous methods which classify entire videosequences, we design a video matching method from twosets of ST-points for human activity recognition. First,points are extracted, and a hyper graphs are constructedfrom them, i.e. graphs with edges involving more than 2nodes (3 in our case). The activity recognition problemis then transformed into a problem of finding instances ofmodel graphs in the scene graph. By matching local featuresinstead of classifying entire sequences, our methodis able to detect multiple different activities which occursimultaneously in a video sequence. 
Experiments on twostandard datasets demonstrate that our method is comparableto the existing techniques on classification, and that itcan, additionally, detect and localize activities.","PeriodicalId":415758,"journal":{"name":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","volume":"92 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"32","resultStr":"{\"title\":\"Recognizing and Localizing Individual Activities through Graph Matching\",\"authors\":\"Anh-Phuong Ta, Christian Wolf, G. Lavoué, A. Baskurt\",\"doi\":\"10.1109/AVSS.2010.81\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we tackle the problem of detecting individualhuman actions in video sequences. While the most successfulmethods are based on local features, which proved thatthey can deal with changes in background, scale and illumination,most existing methods have two main shortcomings:first, they are mainly based on the individual power ofspatio-temporal interest points (STIP), and therefore ignorethe spatio-temporal relationships between them. Second,these methods mainly focus on direct classification techniquesto classify the human activities, as opposed to detectionand localization. In order to overcome these limitations,we propose a new approach, which is based on agraph matching algorithm for activity recognition. In contrastto most previous methods which classify entire videosequences, we design a video matching method from twosets of ST-points for human activity recognition. First,points are extracted, and a hyper graphs are constructedfrom them, i.e. graphs with edges involving more than 2nodes (3 in our case). The activity recognition problemis then transformed into a problem of finding instances ofmodel graphs in the scene graph. 
By matching local featuresinstead of classifying entire sequences, our methodis able to detect multiple different activities which occursimultaneously in a video sequence. Experiments on twostandard datasets demonstrate that our method is comparableto the existing techniques on classification, and that itcan, additionally, detect and localize activities.\",\"PeriodicalId\":415758,\"journal\":{\"name\":\"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance\",\"volume\":\"92 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"32\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AVSS.2010.81\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AVSS.2010.81","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 32

Abstract

In this paper we tackle the problem of detecting individual human actions in video sequences. While the most successful methods are based on local features, which have proved able to deal with changes in background, scale and illumination, most existing methods have two main shortcomings: first, they are mainly based on the individual power of spatio-temporal interest points (STIPs), and therefore ignore the spatio-temporal relationships between them. Second, these methods mainly focus on direct classification techniques to classify human activities, as opposed to detection and localization. To overcome these limitations, we propose a new approach based on a graph matching algorithm for activity recognition. In contrast to most previous methods, which classify entire video sequences, we design a video matching method from two sets of ST-points for human activity recognition. First, points are extracted, and hypergraphs are constructed from them, i.e. graphs with edges involving more than 2 nodes (3 in our case). The activity recognition problem is then transformed into one of finding instances of model graphs in the scene graph. By matching local features instead of classifying entire sequences, our method is able to detect multiple different activities occurring simultaneously in a video sequence. Experiments on two standard datasets demonstrate that our method is comparable to existing classification techniques and that it can, additionally, detect and localize activities.
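The abstract describes building hypergraphs whose edges connect 3 spatio-temporal interest points. The paper's actual STIP detector, descriptors, and matching algorithm are not reproduced here; the following is only a minimal sketch of the hyperedge-construction idea, grouping triplets of mutually nearby (x, y, t) points. The point list, the `radius` threshold, and all function names are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations
import math

# Toy spatio-temporal interest points as (x, y, t) tuples. In the paper these
# would come from an STIP detector run on a video; here they are hand-picked
# so that two spatially separated clusters are visible.
points = [(10, 12, 0), (14, 15, 1), (11, 20, 2),
          (50, 52, 3), (53, 55, 4), (51, 60, 5)]

def hyperedges(pts, radius=15.0):
    """Form 3-node hyperedges from triplets of mutually nearby points.

    A triplet becomes a hyperedge only if all three pairwise Euclidean
    distances in (x, y, t) are within `radius` (an assumed threshold).
    """
    edges = []
    for tri in combinations(pts, 3):
        if all(math.dist(a, b) <= radius for a, b in combinations(tri, 2)):
            edges.append(tri)
    return edges

edges = hyperedges(points)
# Each of the two clusters above yields exactly one 3-node hyperedge;
# cross-cluster triplets are rejected by the distance test.
```

In the full method, a model graph built this way from a labeled activity clip would then be matched against the scene graph of a test video, so that each found instance both recognizes and localizes the activity.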