{"title":"Multi-spatial Semantic Information Aggregation Network for 3D Human Motion Prediction","authors":"Dong He , Jianqi Zhong , Jianhua Ji , Wenming Cao","doi":"10.1016/j.aiopen.2025.08.002","DOIUrl":null,"url":null,"abstract":"<div><div>In recent years, GCN-based methods have achieved great success in skeleton-based human motion prediction tasks due to the human body graph structure. However, existing methods leveraged single semantic information to model the whole motion sequence, which cannot fully exploit the motion dependencies. To tackle this issue, we propose a Multi-spatial Semantic Information Aggregation Network(MSIAN) to enrich the semantic information by focusing on the local spatial structure of the human skeleton. MSIAN includes the Graph-based Feature Extraction and Aggregation Block (GFEAB), where the Integration Graph combines local and global attention to extract spatial features, the Gravity-Centered Graph (GCG) captures the state of each joint by treating the central joint of the skeleton as the center of gravity, and the Spatial Position Graph (SPG) fully utilizes the original joint positions to analyze movements. Extensive experiments show that our proposed MSIAN outperforms the current state-of-the-art methods on Human3.6M, 3DPW, and AMASS datasets. 
Our code is available at <span><span>https://github.com/HDdong-hub/MSIAN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"6 ","pages":"Pages 155-166"},"PeriodicalIF":14.8000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI Open","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666651025000117","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In recent years, GCN-based methods have achieved great success in skeleton-based human motion prediction, thanks to the natural graph structure of the human body. However, existing methods leverage only a single type of semantic information to model the whole motion sequence, and therefore cannot fully exploit motion dependencies. To tackle this issue, we propose a Multi-spatial Semantic Information Aggregation Network (MSIAN) that enriches the semantic information by focusing on the local spatial structure of the human skeleton. MSIAN is built around the Graph-based Feature Extraction and Aggregation Block (GFEAB), in which an Integration Graph combines local and global attention to extract spatial features, a Gravity-Centered Graph (GCG) captures the state of each joint by treating the skeleton's central joint as the center of gravity, and a Spatial Position Graph (SPG) fully exploits the original joint positions to analyze movements. Extensive experiments show that the proposed MSIAN outperforms current state-of-the-art methods on the Human3.6M, 3DPW, and AMASS datasets. Our code is available at https://github.com/HDdong-hub/MSIAN.
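To make the GCN-based modeling concrete, the following is a minimal sketch of one graph-convolution step over skeleton joints, where the adjacency matrix encodes which joints are physically connected. This is an illustrative example only, not the authors' MSIAN implementation; the function name, the toy 3-joint chain, and the feature dimensions are all hypothetical.

```python
import numpy as np

def skeleton_gcn_layer(X, A, W):
    """One graph-convolution step over skeleton joints (illustrative sketch).

    X: (J, C) per-joint features, A: (J, J) skeletal adjacency,
    W: (C, C_out) learnable weights.
    Adds self-loops, row-normalizes the adjacency, propagates features,
    and applies a ReLU nonlinearity.
    """
    A_hat = A + np.eye(A.shape[0])                   # self-loops so each joint keeps its own state
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)   # row-normalization (degree inverse)
    return np.maximum(D_inv * A_hat @ X @ W, 0.0)    # propagate + ReLU

# Toy 3-joint kinematic chain (e.g., hip-knee-ankle): only adjacent joints connect.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.random.randn(3, 3)   # 3 joints, 3-D coordinates as input features
W = np.random.randn(3, 8)   # project to an 8-D hidden feature per joint
out = skeleton_gcn_layer(X, A, W)
print(out.shape)  # (3, 8)
```

Each joint's output mixes its own features with those of its skeletal neighbors; stacking such layers (and, as in the paper's Integration Graph, combining them with attention) lets information flow across the whole body graph.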