{"title":"基于骨架的基于时空图的手势识别卷积网络","authors":"Soumya Jituri, Sankalp Balannavar, Shri Nagahari Savanur, Guruprasad Ghaligi, A. Shanbhag, Uday Kulkarni","doi":"10.1109/I2CT57861.2023.10126371","DOIUrl":null,"url":null,"abstract":"In the recent years, recognition of human actions and the interactions of human body bones provide crucial data. It has been applied in many fields from video intelligence to computer vision. The idea behind working of these have a common approach of using deep learning methods that include Convolutional Networks. The Graph convolution networks (GCN) is extensively used in recognition of skeleton action-based data. We point out that current GCN-based methods generally rely on specified graphical patterns (i.e., a hand-crafted structure of the joints in the skeleton), which hinders their potential to gather intricate connections between joints. Thus a better advanced model can be proposed out of the GCN-based model. This paper aims in delivering a novel model of Spatial Temporal Graph Convolutional Networks (ST-GCN) are interactive skeletons that learn from the spatial and temporal variability of input data(ST-GCN) [1]. We here use a large dataset –Kinetics to perform the analysis and predict the output for given skeletal data.","PeriodicalId":150346,"journal":{"name":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Convolutional Networks for Skeleton-Based Gesture Recognition Using Spatial Temporal Graphs\",\"authors\":\"Soumya Jituri, Sankalp Balannavar, Shri Nagahari Savanur, Guruprasad Ghaligi, A. Shanbhag, Uday Kulkarni\",\"doi\":\"10.1109/I2CT57861.2023.10126371\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the recent years, recognition of human actions and the interactions of human body bones provide crucial data. It has been applied in many fields from video intelligence to computer vision. The idea behind working of these have a common approach of using deep learning methods that include Convolutional Networks. The Graph convolution networks (GCN) is extensively used in recognition of skeleton action-based data. We point out that current GCN-based methods generally rely on specified graphical patterns (i.e., a hand-crafted structure of the joints in the skeleton), which hinders their potential to gather intricate connections between joints. Thus a better advanced model can be proposed out of the GCN-based model. This paper aims in delivering a novel model of Spatial Temporal Graph Convolutional Networks (ST-GCN) are interactive skeletons that learn from the spatial and temporal variability of input data(ST-GCN) [1]. 
We here use a large dataset –Kinetics to perform the analysis and predict the output for given skeletal data.\",\"PeriodicalId\":150346,\"journal\":{\"name\":\"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/I2CT57861.2023.10126371\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 8th International Conference for Convergence in Technology (I2CT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/I2CT57861.2023.10126371","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Convolutional Networks for Skeleton-Based Gesture Recognition Using Spatial Temporal Graphs
In recent years, recognizing human actions from the motion of the body's joints has become a source of crucial data, with applications ranging from video intelligence to computer vision. Most of these systems share a common approach: deep learning methods built on convolutional networks. Graph convolutional networks (GCNs) in particular are widely used for skeleton-based action recognition. We point out that current GCN-based methods generally rely on a predefined graph (i.e., a hand-crafted structure over the skeleton's joints), which limits their ability to capture intricate relationships between joints, leaving room for a more capable model. This paper aims to deliver a model based on Spatial Temporal Graph Convolutional Networks (ST-GCN), which operate directly on skeleton graphs and learn from both the spatial and temporal variability of the input data [1]. We use the large-scale Kinetics dataset to perform the analysis and predict the output for given skeletal data.
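As a rough illustration of the idea described in the abstract, the sketch below shows one spatial-temporal graph convolution block: a graph convolution that aggregates features over neighboring joints via a fixed adjacency matrix, followed by a convolution along the time axis. This is a minimal PyTorch sketch under assumed settings; the class name STGCNBlock, the 18-joint layout, the placeholder adjacency, and the kernel sizes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class STGCNBlock(nn.Module):
    """One spatial graph convolution followed by a temporal convolution (illustrative)."""

    def __init__(self, in_channels, out_channels, A, temporal_kernel=9):
        super().__init__()
        # Fixed, pre-normalized skeleton adjacency matrix of shape (V, V).
        self.register_buffer("A", A)
        # Spatial step: 1x1 convolution mixes channels per joint before
        # aggregating over neighboring joints with A.
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Temporal step: convolution over the frame axis for each joint.
        pad = (temporal_kernel - 1) // 2
        self.temporal = nn.Conv2d(
            out_channels, out_channels,
            kernel_size=(temporal_kernel, 1), padding=(pad, 0),
        )
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (N, C, T, V) = (batch, channels per joint, frames, joints)
        x = self.spatial(x)                            # per-joint channel mixing
        x = torch.einsum("nctv,vw->nctw", x, self.A)   # aggregate over joint neighborhoods
        x = self.temporal(x)                           # convolve along time
        return self.relu(self.bn(x))


# Toy usage with an assumed 18-joint skeleton (e.g. an OpenPose-style layout
# often paired with Kinetics). The identity adjacency is a placeholder; a real
# setup would use the normalized joint-bone graph.
V = 18
A = torch.eye(V)
block = STGCNBlock(in_channels=3, out_channels=64, A=A)
out = block(torch.randn(2, 3, 100, V))   # batch of 2 clips, 100 frames each
print(out.shape)                         # torch.Size([2, 64, 100, 18])
```

In a full model, blocks like this are stacked and followed by global pooling and a classifier over the action or gesture labels; the key point the abstract makes is that the graph structure carries the spatial relationships between joints while the temporal convolutions capture their evolution over frames.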