{"title":"Real-Time Human Falling Recognition via Spatial and Temporal Self-Attention Augmented Graph Convolutional Network","authors":"Jiayao Yuan, Chengju Liu, Chuangwei Liu, Liuyi Wang, Qi Chen","doi":"10.1109/RCAR54675.2022.9872276","DOIUrl":null,"url":null,"abstract":"Currently, the skeleton-based human action recognition (e.g. walking, sitting and falling down) has achieved great interest, because the skeleton graph is robust to complex background and illumination changes compared to images. In this paper, a complete solution to real-time falling recognition task for intelligent monitoring has been provided. First, a manually annotated skeleton dataset for falling down action recognition is published. Then, a real-time self-attention augmented graph convolutional network (ST-SAGCN) is proposed. The network contains two novel architectures: a spatial self-attention module and a temporal self-attention module, which can effectively learn intra-frame correlations between different body parts, and inter-frame correlations between different frames for each joint. Finally, extensive comparative experiments on the dataset have proven that the proposed model can achieve remarkable improvement on falling recognition task. 
When the model is deployed in intelligent monitoring system, it achieves an inference speed over 40 fps and meets the demand of practical applications.","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RCAR54675.2022.9872276","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Skeleton-based human action recognition (e.g., walking, sitting, and falling down) has attracted great interest, because skeleton graphs are robust to complex backgrounds and illumination changes compared with raw images. This paper provides a complete solution to the real-time falling-recognition task for intelligent monitoring. First, a manually annotated skeleton dataset for falling-down action recognition is published. Then, a real-time spatial and temporal self-attention augmented graph convolutional network (ST-SAGCN) is proposed. The network contains two novel modules: a spatial self-attention module and a temporal self-attention module, which effectively learn intra-frame correlations between different body parts and, for each joint, inter-frame correlations between different frames. Finally, extensive comparative experiments on the dataset show that the proposed model achieves a remarkable improvement on the falling-recognition task. When deployed in an intelligent monitoring system, the model achieves an inference speed of over 40 fps and meets the demands of practical applications.
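The two attention patterns the abstract describes can be illustrated with a minimal numpy sketch: spatial self-attention lets the joints within one frame attend to each other, while temporal self-attention lets each joint's trajectory attend across frames. This is not the paper's ST-SAGCN implementation (which uses learned projection weights inside a graph convolutional network); it is only a toy illustration of the attention axes, with identity query/key/value projections and randomly generated skeleton data as assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention over the first axis of x.

    x: (N, C) array of N tokens with C-dim features. Identity
    projections are used for brevity; the actual ST-SAGCN modules
    would use learned W_q, W_k, W_v matrices.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)          # (N, N) pairwise affinities
    return softmax(scores, axis=-1) @ x    # attention-weighted sum, (N, C)

# Toy skeleton sequence: T frames, V joints, C channels (e.g. x, y, score).
T, V, C = 4, 17, 3
seq = np.random.default_rng(0).normal(size=(T, V, C))

# Spatial attention: within each frame, joints attend to one another
# (intra-frame correlations between body parts).
spatial = np.stack([self_attention(seq[t]) for t in range(T)])            # (T, V, C)

# Temporal attention: for each joint, its positions attend across frames
# (inter-frame correlations per joint).
temporal = np.stack([self_attention(seq[:, v]) for v in range(V)], axis=1)  # (T, V, C)

assert spatial.shape == seq.shape and temporal.shape == seq.shape
```

In the paper's model these two modules augment graph convolutions, so long-range joint-to-joint and frame-to-frame dependencies are captured beyond the fixed skeleton adjacency; the sketch above only shows which axes the two attention operations run over.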