Robust Skeleton-based Action Recognition through Hierarchical Aggregation of Local and Global Spatio-temporal Features

Jun Ren, N. Reyes, A. Barczak, C. Scogings, Mingzhe Liu, Jing Ma
{"title":"Robust Skeleton-based Action Recognition through Hierarchical Aggregation of Local and Global Spatio-temporal Features","authors":"Jun Ren, N. Reyes, A. Barczak, C. Scogings, Mingzhe Liu, Jing Ma","doi":"10.1109/ICARCV.2018.8581141","DOIUrl":null,"url":null,"abstract":"Recognizing human actions based on 3D skeleton data, commonly referred to as 3D action recognition, is fast gaining interest from the scientific community recently, because this approach presents a robust, compact and a perspective-invariant representation of motion data. Recent attempts on this problem proposed the development of RNN-based learning methods to model the temporal dependency in the sequential data. In this paper, we extend this idea to a hierarchical spatio-temporal domains to exploit the local and global features embedded in the long skeleton sequence. We introduce a novel temporal-contextual recurrent layer to learn the local features from consecutive frames and then to aggregate the extracted features hierarchically, refining the sequence representation layer by layer. Our method achieves competitive performance on 3 popular benchmark datasets for 3D human action analysis.","PeriodicalId":395380,"journal":{"name":"2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICARCV.2018.8581141","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Recognizing human actions from 3D skeleton data, commonly referred to as 3D action recognition, has recently gained rapid interest from the scientific community because this approach offers a robust, compact and perspective-invariant representation of motion data. Recent attempts at this problem proposed RNN-based learning methods to model the temporal dependency in sequential data. In this paper, we extend this idea to hierarchical spatio-temporal domains to exploit the local and global features embedded in long skeleton sequences. We introduce a novel temporal-contextual recurrent layer that learns local features from consecutive frames and then aggregates the extracted features hierarchically, refining the sequence representation layer by layer. Our method achieves competitive performance on three popular benchmark datasets for 3D human action analysis.
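To make the hierarchical-aggregation idea concrete, the following is a minimal sketch (not the authors' implementation) of a recurrent encoder that summarizes short local windows of a skeleton sequence and stacks such layers so the sequence is refined layer by layer. The window size, hidden size, layer count, and class/module names are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of hierarchical temporal aggregation: each layer runs a GRU over
# short local windows of the skeleton sequence, keeps the final hidden state of each
# window as that window's summary, and passes the shortened sequence to the next layer.
# All hyperparameters below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class HierarchicalTemporalEncoder(nn.Module):
    def __init__(self, in_dim, hidden_dim=128, num_layers=3, window=4):
        super().__init__()
        self.window = window
        dims = [in_dim] + [hidden_dim] * num_layers
        self.rnns = nn.ModuleList(
            [nn.GRU(dims[i], dims[i + 1], batch_first=True) for i in range(num_layers)]
        )

    def forward(self, x):  # x: (batch, frames, joint_features)
        for rnn in self.rnns:
            b, t, d = x.shape
            pad = (-t) % self.window  # zero-pad so frames split evenly into windows
            if pad:
                x = torch.cat([x, x.new_zeros(b, pad, d)], dim=1)
                t += pad
            # split into non-overlapping local windows and encode each with the GRU
            windows = x.reshape(b * (t // self.window), self.window, d)
            _, h = rnn(windows)  # h: (1, b * num_windows, hidden_dim)
            x = h.squeeze(0).reshape(b, t // self.window, -1)  # shortened sequence
        return x.mean(dim=1)  # global sequence-level representation

# Usage: 64-frame skeleton clips with 75-dim frame vectors (e.g. 25 joints x 3 coords)
model = HierarchicalTemporalEncoder(in_dim=75)
clip = torch.randn(8, 64, 75)
features = model(clip)  # (8, 128) clip-level features for an action classifier
```

In this sketch each layer shortens the sequence by the window factor, so early layers capture local frame-to-frame dynamics while later layers aggregate them into a global clip representation.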