A Joint Matrix Factorization Approach to Unsupervised Action Categorization

Peng Cui, Fei Wang, Lifeng Sun, Shiqiang Yang
{"title":"无监督动作分类的联合矩阵分解方法","authors":"Peng Cui, Fei Wang, Lifeng Sun, Shiqiang Yang","doi":"10.1109/ICDM.2008.59","DOIUrl":null,"url":null,"abstract":"In this paper, a novel unsupervised approach to mining categories from action video sequences is presented. This approach consists of two modules: action representation and learning model. Videos are regarded as spatially distributed dynamic pixel time series, which are quantized into pixel prototypes. After replacing the pixel time series with their corresponding prototype labels, the video sequences are compressed into 2D action matrices. We put these matrices together to form an multi-action tensor, and propose the joint matrix factorization method to simultaneously cluster the pixel prototypes into pixel signatures, and matrices into action classes. The approach is tested on public and popular Weizmann data set, and promising results are achieved.","PeriodicalId":252958,"journal":{"name":"2008 Eighth IEEE International Conference on Data Mining","volume":"8 4-5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"A Joint Matrix Factorization Approach to Unsupervised Action Categorization\",\"authors\":\"Peng Cui, Fei Wang, Lifeng Sun, Shiqiang Yang\",\"doi\":\"10.1109/ICDM.2008.59\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a novel unsupervised approach to mining categories from action video sequences is presented. This approach consists of two modules: action representation and learning model. Videos are regarded as spatially distributed dynamic pixel time series, which are quantized into pixel prototypes. After replacing the pixel time series with their corresponding prototype labels, the video sequences are compressed into 2D action matrices. We put these matrices together to form an multi-action tensor, and propose the joint matrix factorization method to simultaneously cluster the pixel prototypes into pixel signatures, and matrices into action classes. The approach is tested on public and popular Weizmann data set, and promising results are achieved.\",\"PeriodicalId\":252958,\"journal\":{\"name\":\"2008 Eighth IEEE International Conference on Data Mining\",\"volume\":\"8 4-5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 Eighth IEEE International Conference on Data Mining\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDM.2008.59\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 Eighth IEEE International Conference on Data Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDM.2008.59","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5

Abstract

In this paper, a novel unsupervised approach to mining categories from action video sequences is presented. The approach consists of two modules: action representation and a learning model. Videos are regarded as spatially distributed dynamic pixel time series, which are quantized into pixel prototypes. After replacing the pixel time series with their corresponding prototype labels, the video sequences are compressed into 2D action matrices. These matrices are stacked to form a multi-action tensor, and a joint matrix factorization method is proposed to simultaneously cluster the pixel prototypes into pixel signatures and the matrices into action classes. The approach is tested on the public and popular Weizmann data set, and promising results are achieved.
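The pipeline sketched in the abstract (quantize pixel time series into prototypes, build a 2D action matrix per video, then jointly factor the stack of matrices) can be illustrated with a much-simplified stand-in. The sketch below is an assumption-laden approximation, not the paper's formulation: it replaces the joint factorization of the multi-action tensor with a shared-basis nonnegative matrix factorization (the shared basis playing the role of pixel signatures) followed by k-means on each video's mean signature activation (standing in for clustering matrices into action classes). All function names, shapes, and the synthetic data are hypothetical.

```python
# Illustrative sketch only: shared-basis NMF over a stack of "action matrices"
# followed by k-means on per-video coefficient summaries. This approximates the
# idea of jointly grouping pixel prototypes (rows) and videos (matrices); it is
# NOT the paper's exact joint matrix factorization.
import numpy as np

def shared_basis_nmf(actions, n_signatures, n_iter=200, eps=1e-9, seed=0):
    """actions: list of K nonnegative (n_prototypes x T_k) action matrices."""
    rng = np.random.default_rng(seed)
    n_prototypes = actions[0].shape[0]
    U = rng.random((n_prototypes, n_signatures)) + eps   # shared basis: prototypes -> signatures
    Hs = [rng.random((n_signatures, A.shape[1])) + eps for A in actions]
    for _ in range(n_iter):
        # multiplicative updates (Lee & Seung style), with U shared across all matrices
        num = np.zeros_like(U)
        den = np.zeros_like(U)
        for A, H in zip(actions, Hs):
            num += A @ H.T
            den += U @ (H @ H.T)
        U *= num / (den + eps)
        for k, (A, H) in enumerate(zip(actions, Hs)):
            Hs[k] = H * (U.T @ A) / (U.T @ U @ H + eps)
    return U, Hs

def cluster_actions(Hs, n_classes, seed=0):
    """Summarize each video by its mean signature activation, then k-means the summaries."""
    feats = np.stack([H.mean(axis=1) for H in Hs])        # (K, n_signatures)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_classes, replace=False)]
    for _ in range(50):
        labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([feats[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(n_classes)])
    return labels

# Synthetic example: 6 videos, 40 pixel prototypes, varying temporal lengths.
videos = [np.random.rand(40, t) for t in (30, 32, 28, 30, 31, 29)]
U, Hs = shared_basis_nmf(videos, n_signatures=5)
print(cluster_actions(Hs, n_classes=2))
```

Sharing the basis U across all videos is what couples the per-video factorizations here; in the paper, this coupling and the video-level clustering are handled within a single joint objective rather than the two-stage procedure used in this sketch.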