Human action recognition using labeled Latent Dirichlet Allocation model

Jiahui Yang, Changhong Chen, Z. Gan, Xiuchang Zhu
{"title":"基于标记潜狄利克雷分配模型的人体动作识别","authors":"Jiahui Yang, Changhong Chen, Z. Gan, Xiuchang Zhu","doi":"10.1109/WCSP.2013.6677264","DOIUrl":null,"url":null,"abstract":"Recognition of human actions has already been an active area in the computer vision domain and techniques related to action recognition have been applied in plenty of fields such as smart surveillance, motion analysis and virtual reality. In this paper, we propose a new action recognition method which represents human actions as a bag of spatio-temporal words extracted from input video sequences and uses L-LDA (labeled Latent Dirichlet Allocation) model as a classifier. L-LDA is a supervised model extended from LDA which is unsupervised. The L-LDA adds a label layer on the basis of LDA to label the category of the train video sequences, so L-LDA can assign the latent topic variable in the model to the specific action categorization automatically. What's more, due to above characteristic of L-LDA, it can help to estimate the model parameters more reasonably, accurately and fast. We test our method on the KTH and Weizmann human action dataset and the experimental results show that L-LDA is better than its unsupervised counterpart LDA as well as SVMs (support vector machines).","PeriodicalId":342639,"journal":{"name":"2013 International Conference on Wireless Communications and Signal Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Human action recognition using labeled Latent Dirichlet Allocation model\",\"authors\":\"Jiahui Yang, Changhong Chen, Z. Gan, Xiuchang Zhu\",\"doi\":\"10.1109/WCSP.2013.6677264\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recognition of human actions has already been an active area in the computer vision domain and techniques related to action recognition have been applied in plenty of fields such as smart surveillance, motion analysis and virtual reality. In this paper, we propose a new action recognition method which represents human actions as a bag of spatio-temporal words extracted from input video sequences and uses L-LDA (labeled Latent Dirichlet Allocation) model as a classifier. L-LDA is a supervised model extended from LDA which is unsupervised. The L-LDA adds a label layer on the basis of LDA to label the category of the train video sequences, so L-LDA can assign the latent topic variable in the model to the specific action categorization automatically. What's more, due to above characteristic of L-LDA, it can help to estimate the model parameters more reasonably, accurately and fast. 
We test our method on the KTH and Weizmann human action dataset and the experimental results show that L-LDA is better than its unsupervised counterpart LDA as well as SVMs (support vector machines).\",\"PeriodicalId\":342639,\"journal\":{\"name\":\"2013 International Conference on Wireless Communications and Signal Processing\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-12-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 International Conference on Wireless Communications and Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WCSP.2013.6677264\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 International Conference on Wireless Communications and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WCSP.2013.6677264","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Recognition of human actions has been an active area in computer vision, and techniques related to action recognition have been applied in many fields such as smart surveillance, motion analysis, and virtual reality. In this paper, we propose a new action recognition method that represents human actions as a bag of spatio-temporal words extracted from input video sequences and uses the L-LDA (labeled Latent Dirichlet Allocation) model as a classifier. L-LDA is a supervised model extended from the unsupervised LDA: it adds a label layer on top of LDA to mark the category of each training video sequence, so that the latent topic variables in the model are assigned to specific action categories automatically. Owing to this characteristic, L-LDA also allows the model parameters to be estimated more reasonably, accurately, and quickly. We test our method on the KTH and Weizmann human action datasets, and the experimental results show that L-LDA outperforms both its unsupervised counterpart LDA and SVMs (support vector machines).
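
The abstract describes a two-stage pipeline: local spatio-temporal descriptors are quantized into a visual vocabulary to form a bag-of-words histogram per video, and an L-LDA model whose latent topics correspond to action classes is used for classification. The Python sketch below is a minimal, hypothetical illustration of that pipeline, not the authors' implementation: the descriptor extraction is assumed to happen elsewhere, and the vocabulary size, k-means quantization, and Dirichlet smoothing value are illustrative assumptions. With a single label per training video, the L-LDA training step collapses to smoothed per-class word counts.

import numpy as np
from sklearn.cluster import KMeans


def build_vocabulary(descriptors, vocab_size=500, seed=0):
    # descriptors: list of (n_i, d) arrays of local spatio-temporal descriptors
    # (hypothetical input; the paper's actual descriptors are extracted upstream).
    km = KMeans(n_clusters=vocab_size, n_init=10, random_state=seed)
    km.fit(np.vstack(descriptors))
    return km


def to_histogram(video_descriptors, km):
    # Quantize one video's descriptors and count visual-word occurrences.
    words = km.predict(video_descriptors)
    return np.bincount(words, minlength=km.n_clusters)


class LabeledLDAClassifier:
    # Degenerate labeled LDA with one topic per action class: because every
    # training video carries a single label, each word's topic is forced to
    # that label, so the per-class topic-word distributions reduce to
    # Dirichlet-smoothed word counts (an assumption made for brevity here).
    def __init__(self, beta=0.01):
        self.beta = beta  # symmetric Dirichlet prior over visual words

    def fit(self, histograms, labels):
        histograms = np.asarray(histograms, dtype=float)
        labels = np.asarray(labels)
        self.classes_ = np.unique(labels)
        counts = np.stack([histograms[labels == c].sum(axis=0)
                           for c in self.classes_])
        counts += self.beta
        self.log_phi_ = np.log(counts / counts.sum(axis=1, keepdims=True))
        return self

    def predict(self, histograms):
        # Score each test histogram against every class topic, pick the best.
        scores = np.asarray(histograms, dtype=float) @ self.log_phi_.T
        return self.classes_[np.argmax(scores, axis=1)]


# Example usage with random stand-ins for real spatio-temporal descriptors:
rng = np.random.default_rng(0)
train_desc = [rng.normal(size=(200, 72)) for _ in range(20)]
train_labels = ["walk"] * 10 + ["run"] * 10
km = build_vocabulary(train_desc, vocab_size=50)
X_train = np.stack([to_histogram(d, km) for d in train_desc])
clf = LabeledLDAClassifier().fit(X_train, train_labels)
print(clf.predict(np.stack([to_histogram(rng.normal(size=(200, 72)), km)])))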