Xingquan Cai, Haoyu Zhang, LiZhe Chen, YiJie Wu, Haiyan Sun
{"title":"3D human pose estimation using spatiotemporal hypergraphs and its public benchmark on opera videos","authors":"Xingquan Cai, Haoyu Zhang, LiZhe Chen, YiJie Wu, Haiyan Sun","doi":"10.1007/s00371-024-03604-y","DOIUrl":null,"url":null,"abstract":"<p>Graph convolutional networks significantly improve the 3D human pose estimation accuracy by representing the human skeleton as an undirected spatiotemporal graph. However, this representation fails to reflect the cross-connection interactions of multiple joints, and the current 3D human pose estimation methods have larger errors in opera videos due to the occlusion of clothing and movements in opera videos. In this paper, we propose a 3D human pose estimation method based on spatiotemporal hypergraphs for opera videos. <i>First, the 2D human pose sequence of the opera video performer is inputted, and based on the interaction information between multiple joints in the opera action, multiple spatiotemporal hypergraphs representing the spatial correlation and temporal continuity of the joints are generated. Then, a hypergraph convolution network is constructed using the joints spatiotemporal hypergraphs to extract the spatiotemporal features in the 2D human poses sequence. Finally, a multi-hypergraph cross-attention mechanism is introduced to strengthen the correlation between spatiotemporal hypergraphs and predict 3D human poses</i>. Experiments show that our method achieves the best performance on the Human3.6M and MPI-INF-3DHP datasets compared to the graph convolutional network and Transformer-based methods. In addition, ablation experiments show that the multiple spatiotemporal hypergraphs we generate can effectively improve the network accuracy compared to the undirected spatiotemporal graph. The experiments demonstrate that the method can obtain accurate 3D human poses in the presence of clothing and limb occlusion in opera videos. 
Codes will be available at: https://github.com/zhanghaoyu0408/hyperAzzy.</p>","PeriodicalId":501186,"journal":{"name":"The Visual Computer","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Visual Computer","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00371-024-03604-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Graph convolutional networks significantly improve 3D human pose estimation accuracy by representing the human skeleton as an undirected spatiotemporal graph. However, this representation cannot capture cross-connection interactions among multiple joints, and current 3D human pose estimation methods produce larger errors on opera videos because clothing and movements occlude the body. In this paper, we propose a 3D human pose estimation method for opera videos based on spatiotemporal hypergraphs. First, the 2D human pose sequence of the opera performer is taken as input, and, based on the interaction information among multiple joints in opera actions, multiple spatiotemporal hypergraphs representing the spatial correlation and temporal continuity of the joints are generated. Then, a hypergraph convolutional network is constructed over the joint spatiotemporal hypergraphs to extract spatiotemporal features from the 2D human pose sequence. Finally, a multi-hypergraph cross-attention mechanism is introduced to strengthen the correlation between the spatiotemporal hypergraphs and predict 3D human poses. Experiments show that our method achieves the best performance on the Human3.6M and MPI-INF-3DHP datasets compared to graph convolutional network and Transformer-based methods. In addition, ablation experiments show that the multiple spatiotemporal hypergraphs we generate improve network accuracy over the undirected spatiotemporal graph. The experiments demonstrate that the method obtains accurate 3D human poses in the presence of clothing and limb occlusion in opera videos. Code will be available at: https://github.com/zhanghaoyu0408/hyperAzzy.
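To make the core operation concrete, below is a minimal sketch of a generic spectral hypergraph convolution layer of the kind the abstract describes, where hyperedges group multiple interacting joints instead of pairwise bones. This follows the standard HGNN-style propagation rule X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Θ (with unit hyperedge weights); it is an illustrative assumption, not the authors' exact architecture, and the toy joint grouping and shapes are invented for the example.

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One generic hypergraph convolution layer (HGNN-style, unit edge weights):
        X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X Theta
    X:     (N, F)  per-joint features
    H:     (N, E)  incidence matrix, H[v, e] = 1 if joint v is in hyperedge e
    Theta: (F, F_out) learnable projection
    """
    dv = H.sum(axis=1)                      # node (joint) degrees
    de = H.sum(axis=0)                      # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    # Normalized hypergraph "adjacency": joints sharing a hyperedge exchange information
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt
    return A @ X @ Theta

# Toy skeleton: 5 joints, two hypothetical hyperedges grouping interacting joints,
# e.g. {0,1,2} as an "arm" group and {2,3,4} as a "torso" group (joint 2 is shared).
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))             # toy per-joint input features
Theta = rng.standard_normal((3, 4))
out = hypergraph_conv(X, H, Theta)
print(out.shape)                            # (5, 4): one refined feature vector per joint
```

A temporal hypergraph would be built the same way, with hyperedges connecting the same joint across consecutive frames; stacking such layers yields the spatiotemporal features the abstract refers to.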