Action retrieval based on generalized dynamic depth data matching

Lujun Chen, H. Yao, Xiaoshuai Sun
{"title":"基于广义动态深度数据匹配的动作检索","authors":"Lujun Chen, H. Yao, Xiaoshuai Sun","doi":"10.1109/VCIP.2012.6410774","DOIUrl":null,"url":null,"abstract":"With the great popularity and extensive application of Kinect, the Internet is sharing more and more depth data. To effectively use plenty of depth data would make great sense. In this paper, we propose a generalized dynamic depth data matching framework for action retrieval. Firstly we focus on single depth image matching utilizing both depth and shape feature. The depth feature used in our method is straightforward but proved to be very effective and robust for distinguishing various human actions. Then, we adopt shape context, which is widely used in shape matching, in order to strengthen the robustness of our matching strategy. Finally, we utilize Dynamic Time Warping to measure temporal similarity between two depth video sequences. Experiments based on a dataset of 17 classes of actions from 10 different individuals demonstrate the effectiveness and robustness of our proposed matching strategy.","PeriodicalId":103073,"journal":{"name":"2012 Visual Communications and Image Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Action retrieval based on generalized dynamic depth data matching\",\"authors\":\"Lujun Chen, H. Yao, Xiaoshuai Sun\",\"doi\":\"10.1109/VCIP.2012.6410774\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the great popularity and extensive application of Kinect, the Internet is sharing more and more depth data. To effectively use plenty of depth data would make great sense. In this paper, we propose a generalized dynamic depth data matching framework for action retrieval. Firstly we focus on single depth image matching utilizing both depth and shape feature. The depth feature used in our method is straightforward but proved to be very effective and robust for distinguishing various human actions. Then, we adopt shape context, which is widely used in shape matching, in order to strengthen the robustness of our matching strategy. Finally, we utilize Dynamic Time Warping to measure temporal similarity between two depth video sequences. 
Experiments based on a dataset of 17 classes of actions from 10 different individuals demonstrate the effectiveness and robustness of our proposed matching strategy.\",\"PeriodicalId\":103073,\"journal\":{\"name\":\"2012 Visual Communications and Image Processing\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 Visual Communications and Image Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VCIP.2012.6410774\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Visual Communications and Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VCIP.2012.6410774","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

With the growing popularity and widespread application of Kinect, more and more depth data is being shared on the Internet, and making effective use of this abundant depth data is of great value. In this paper, we propose a generalized dynamic depth data matching framework for action retrieval. First, we focus on single depth image matching using both depth and shape features. The depth feature used in our method is straightforward, yet proves to be very effective and robust for distinguishing various human actions. We then adopt shape context, which is widely used in shape matching, to strengthen the robustness of our matching strategy. Finally, we use Dynamic Time Warping to measure the temporal similarity between two depth video sequences. Experiments on a dataset of 17 action classes performed by 10 different individuals demonstrate the effectiveness and robustness of the proposed matching strategy.
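The abstract does not give implementation details for the shape-context step, but the general technique is well established: each contour point of the body silhouette is described by a log-polar histogram of the relative positions of all other contour points. A minimal sketch in Python/NumPy follows; the bin counts, radial range, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def shape_context(points, n_radial=5, n_angular=12):
    """Log-polar shape-context histogram for each 2-D contour point.

    points: (N, 2) array of silhouette contour coordinates.
    Returns an (N, n_radial * n_angular) array of per-point descriptors.
    (Illustrative sketch; parameters are assumptions, not the paper's.)
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]          # pairwise offsets between points
    dist = np.linalg.norm(diff, axis=2)
    angle = np.arctan2(diff[..., 1], diff[..., 0])    # orientation of each offset
    mean_d = dist[dist > 0].mean()                    # normalise radii by the mean distance
    # logarithmically spaced radial bin edges, relative to the mean distance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_radial + 1) * mean_d
    r_bin = np.digitize(dist, r_edges) - 1            # -1 means closer than the innermost edge
    a_bin = ((angle + np.pi) / (2 * np.pi) * n_angular).astype(int) % n_angular
    hist = np.zeros((n, n_radial, n_angular))
    for i in range(n):
        for j in range(n):
            if i != j and 0 <= r_bin[i, j] < n_radial:
                hist[i, r_bin[i, j], a_bin[i, j]] += 1
    return hist.reshape(n, -1)
```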
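Dynamic Time Warping aligns two sequences of different lengths by finding the minimum-cost monotonic alignment of their frames. The sketch below shows the standard DTW recurrence, assuming each frame has already been reduced to a fixed-length descriptor and that frames are compared with a user-supplied distance; both are illustrative assumptions, since the paper's per-frame cost combines its own depth and shape-context features.

```python
import numpy as np

def dtw_distance(seq_a, seq_b, frame_dist):
    """Dynamic Time Warping distance between two sequences of per-frame
    feature vectors, using frame_dist(a, b) as the local cost."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_dist(seq_a[i - 1], seq_b[j - 1])
            # extend the cheapest of the three allowed warping steps
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in seq_a
                                 cost[i, j - 1],      # skip a frame in seq_b
                                 cost[i - 1, j - 1])  # match the two frames
    return cost[n, m]

# example usage: per-frame descriptors as plain vectors, compared with L2 distance
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    query = rng.random((20, 64))       # 20 frames, 64-dim per-frame descriptor
    candidate = rng.random((25, 64))   # 25 frames
    l2 = lambda a, b: float(np.linalg.norm(a - b))
    print(dtw_distance(query, candidate, l2))
```

A smaller DTW distance indicates a better temporal alignment, so candidate depth sequences can simply be ranked by this score for retrieval.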