Spatial-Temporal Correlatons for Unsupervised Action Classification

S. Savarese, A. DelPozo, Juan Carlos Niebles, Li Fei-Fei
DOI: 10.1109/WMVC.2008.4544068 · 2008 IEEE Workshop on Motion and Video Computing · Published 2008-01-08
Citations: 210

Abstract

Spatial-temporal local motion features have shown promising results in complex human action classification. Most previous works [6],[16],[21] treat these spatial-temporal features as a bag of video words, omitting any long-range, global information in either the spatial or temporal domain. Other ways of learning the temporal signature of motion tend to impose a fixed trajectory on the features or body parts returned by tracking algorithms. This leaves little flexibility for the algorithm to learn the optimal temporal pattern describing these motions. In this paper, we propose the use of spatial-temporal correlograms to encode flexible long-range temporal information into the spatial-temporal motion features. This results in a much richer description of human actions. We then apply an unsupervised generative model to learn different classes of human actions from these ST-correlograms. The KTH dataset, one of the most challenging and popular human action datasets, is used for experimental evaluation. Our algorithm achieves the highest classification accuracy reported for this dataset under an unsupervised learning scheme.
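The core idea from the abstract — counting how pairs of quantized video words co-occur at various temporal offsets, rather than pooling them into an orderless bag — can be sketched as follows. This is a minimal illustration of a temporal correlogram, not the authors' actual implementation; the representation of frames as sets of word indices and the choice of offsets are assumptions for the example.

```python
from collections import Counter
from itertools import product

def temporal_correlogram(word_sequence, offsets):
    """Count co-occurrences of visual-word pairs at given temporal lags.

    word_sequence: list of sets; entry t holds the quantized video words
    detected in frame t. offsets: temporal lags (in frames) to tabulate.
    Returns {offset: Counter over (word_i, word_j) pairs} -- a crude
    temporal analogue of the ST-correlograms described in the abstract.
    """
    correlogram = {d: Counter() for d in offsets}
    T = len(word_sequence)
    for t in range(T):
        for d in offsets:
            if t + d < T:
                # every word at time t paired with every word at time t+d
                for wi, wj in product(word_sequence[t], word_sequence[t + d]):
                    correlogram[d][(wi, wj)] += 1
    return correlogram

# Toy example: two visual words alternating over four frames
seq = [{0}, {1}, {0}, {1}]
corr = temporal_correlogram(seq, offsets=[1, 2])
print(corr[1][(0, 1)])  # word 0 followed by word 1 at lag 1 -> 2
print(corr[2][(0, 0)])  # word 0 repeats at lag 2 -> 1
```

Unlike a plain bag of words, the resulting counts retain the order and spacing of motion events, which is what allows an unsupervised generative model downstream to separate actions that share the same words but in different temporal arrangements.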