Yiming Shao, Lintao Mao, Leixiong Ye, Jincheng Li, Ping Yang, Chengtao Ji, Zizhao Wu
Journal: Journal of King Saud University - Computer and Information Sciences (Q1, Computer Science, Information Systems; Impact Factor 5.2)
DOI: 10.1016/j.jksuci.2024.102072
Published: 2024-06-01 (Journal Article; open-access PDF: https://www.sciencedirect.com/science/article/pii/S1319157824001617)
H2GCN: A hybrid hypergraph convolution network for skeleton-based action recognition
Recent GCN-based works have achieved remarkable results for skeleton-based human action recognition. Nevertheless, while existing approaches extensively investigate pairwise joint relationships, only a limited number of models explore the intricate, high-order relationships among multiple joints. In this paper, we propose a novel hypergraph convolution method that represents the relationships among multiple joints with hyperedges, and dynamically refines the high-order relationships between hyperedges in the spatial, temporal, and channel dimensions. Specifically, our method begins with a temporal-channel refinement hypergraph convolutional network that dynamically learns temporal and channel topologies in a data-dependent manner, which facilitates the capture of non-physical structural information inherent in the human body. Furthermore, to model the various inter-joint relationships across spatio-temporal dimensions, we propose a spatio-temporal hypergraph joint module, which aims to encapsulate the dynamic spatial-temporal characteristics of the human body. Through the integration of these modules, our proposed model achieves state-of-the-art performance on the NTU RGB+D 60 and NTU RGB+D 120 datasets.
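The abstract's core idea can be illustrated with a minimal sketch of hypergraph convolution in the standard incidence-matrix (HGNN-style) formulation, where joints are vertices and each hyperedge groups several joints. This is a hedged illustration of the general technique, not the authors' H2GCN implementation; the toy skeleton, hyperedge grouping, and feature sizes below are assumptions for demonstration only.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer (unit hyperedge weights assumed).

    X:     (V, C)  vertex (joint) features
    H:     (V, E)  incidence matrix, H[v, e] = 1 if joint v is in hyperedge e
    Theta: (C, C_out) learnable projection
    """
    dv = H.sum(axis=1)                                 # vertex degrees, (V,)
    de = H.sum(axis=0)                                 # hyperedge degrees, (E,)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    # Normalized vertex-to-vertex propagation through shared hyperedges
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt   # (V, V)
    return np.maximum(A @ X @ Theta, 0.0)              # ReLU activation

# Toy skeleton: 5 joints, 2 hyperedges (e.g. "arm" and "torso" groups)
H = np.array([[1, 0],
              [1, 0],
              [1, 1],   # joint shared by both hyperedges
              [0, 1],
              [0, 1]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))       # 3-dim input features per joint
Theta = rng.standard_normal((3, 4))   # project to 4-dim output features
out = hypergraph_conv(X, H, Theta)
print(out.shape)  # (5, 4)
```

A learnable or data-dependent refinement of `H` (and separate versions of it along the temporal and channel dimensions) is what distinguishes the dynamic topologies described in the abstract from this fixed-incidence baseline.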
Journal introduction:
In 2022 the Journal of King Saud University - Computer and Information Sciences became an author-paid open-access journal. Authors who submitted their manuscript after October 31st, 2021 are asked to pay an Article Processing Charge (APC) after acceptance of their paper to make their work immediately, permanently, and freely accessible to all. The Journal of King Saud University - Computer and Information Sciences is a refereed, international journal that covers all aspects of both the foundations of computing and its practical applications.