SlowFast-GCN: A Novel Skeleton-Based Action Recognition Framework

Cheng-Hung Lin, Po-Yung Chou, Cheng-Hsien Lin, Min-Yen Tsai
{"title":"SlowFast-GCN: A Novel Skeleton-Based Action Recognition Framework","authors":"Cheng-Hung Lin, Po-Yung Chou, Cheng-Hsien Lin, Min-Yen Tsai","doi":"10.1109/ICPAI51961.2020.00039","DOIUrl":null,"url":null,"abstract":"Human action recognition plays an important role in video surveillance, human-computer interaction, video understanding, and virtual reality. Different from two-dimensional object recognition, human action recognition is a dynamic object recognition with a time series relationship, and it faces many challenges from complex environments, such as color shift, light and shadow changes, and sampling angles. In order to improve the accuracy of human action recognition, many studies have proposed skeleton-based action recognition methods that are not affected by the background, but the current framework does not have much discussion on the integration of the time dimension.In this paper, we propose a novel SlowFast-GCN framework which combines the advantages of ST-GCN and SlowFastNet with dynamic human skeleton to improve the accuracy of human action recognition. The proposed framework uses two streams, one stream captures fine-grained motion changes, and the other stream captures static semantics. Through these two streams, we can merge the human skeleton features from two different time dimensions. Experimental results show that the proposed framework outperforms to state-of-the-art approaches on the NTU-RGBD dataset.","PeriodicalId":330198,"journal":{"name":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Pervasive Artificial Intelligence (ICPAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPAI51961.2020.00039","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Human action recognition plays an important role in video surveillance, human-computer interaction, video understanding, and virtual reality. Unlike two-dimensional object recognition, human action recognition is a dynamic recognition task with a time-series relationship, and it faces many challenges from complex environments, such as color shift, light and shadow changes, and sampling angles. To improve the accuracy of human action recognition, many studies have proposed skeleton-based methods that are not affected by the background, but current frameworks offer little discussion of how to integrate the time dimension. In this paper, we propose a novel SlowFast-GCN framework that combines the advantages of ST-GCN and SlowFastNet on dynamic human skeletons to improve the accuracy of human action recognition. The proposed framework uses two streams: one captures fine-grained motion changes, and the other captures static semantics. Through these two streams, we can merge human skeleton features from two different time dimensions. Experimental results show that the proposed framework outperforms state-of-the-art approaches on the NTU-RGBD dataset.
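To make the two-stream idea concrete, below is a minimal PyTorch sketch of how a SlowFast-style skeleton model could be wired up. The abstract does not specify the layer widths, the frame-rate ratio between the streams, the adjacency matrix, or the fusion scheme, so everything here (the names `SlowFastGCNSketch` and `Pathway`, the ratio `alpha=4`, the identity adjacency placeholder, and fusion by concatenation after pooling) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One spatial graph convolution over joints: X' = (W X) A.
    A hypothetical simplification of an ST-GCN spatial unit."""
    def __init__(self, in_ch, out_ch, A):
        super().__init__()
        self.register_buffer("A", A)              # (V, V) joint adjacency (placeholder here)
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                         # x: (N, C, T, V)
        x = self.proj(x)                          # per-joint channel mixing
        return torch.einsum("nctv,vw->nctw", x, self.A)


class Pathway(nn.Module):
    """A small spatial-GCN + temporal-conv block, used by both streams."""
    def __init__(self, in_ch, out_ch, A):
        super().__init__()
        self.gcn = GraphConv(in_ch, out_ch, A)
        self.tcn = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.tcn(self.gcn(x)))


class SlowFastGCNSketch(nn.Module):
    """Two-stream skeleton model in the spirit of SlowFast-GCN:
    the fast stream sees every frame (fine-grained motion), the slow
    stream sees a temporally subsampled copy (static semantics).
    `alpha` and the concat fusion are assumptions, not paper values."""
    def __init__(self, num_joints=25, num_classes=60, alpha=4):
        super().__init__()
        A = torch.eye(num_joints)                 # placeholder; a real model uses the skeleton graph
        self.alpha = alpha
        self.fast = Pathway(3, 32, A)             # 3 input channels: (x, y, z) joint coordinates
        self.slow = Pathway(3, 64, A)
        self.head = nn.Linear(32 + 64, num_classes)

    def forward(self, x):                         # x: (N, 3, T, V)
        fast = self.fast(x)                       # full frame rate
        slow = self.slow(x[:, :, ::self.alpha])   # keep every alpha-th frame
        # pool each stream over time and joints, then fuse by concatenation
        fast = fast.mean(dim=(2, 3))
        slow = slow.mean(dim=(2, 3))
        return self.head(torch.cat([fast, slow], dim=1))


if __name__ == "__main__":
    model = SlowFastGCNSketch()
    clip = torch.randn(2, 3, 64, 25)              # batch of 2, 64 frames, 25 joints
    print(model(clip).shape)                      # torch.Size([2, 60])
```

The joint counts match the NTU-RGBD skeleton format (25 joints, 60 classes in the cross-subject benchmark); the key design point the sketch illustrates is that the two pathways share input data but differ in temporal sampling rate, so their pooled features describe the same clip at two time scales before fusion.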