STTG-net: a Spatio-temporal network for human motion prediction based on transformer and graph convolution network

CAS Tier 4 (Computer Science) · JCR Q1 (Arts and Humanities)
Lujing Chen, Rui Liu, Xin Yang, Dongsheng Zhou, Qiang Zhang, Xiaopeng Wei
{"title":"STTG-net:一个基于变压器和图卷积网络的人体运动预测时空网络。","authors":"Lujing Chen,&nbsp;Rui Liu,&nbsp;Xin Yang,&nbsp;Dongsheng Zhou,&nbsp;Qiang Zhang,&nbsp;Xiaopeng Wei","doi":"10.1186/s42492-022-00112-5","DOIUrl":null,"url":null,"abstract":"<p><p>In recent years, human motion prediction has become an active research topic in computer vision. However, owing to the complexity and stochastic nature of human motion, it remains a challenging problem. In previous works, human motion prediction has always been treated as a typical inter-sequence problem, and most works have aimed to capture the temporal dependence between successive frames. However, although these approaches focused on the effects of the temporal dimension, they rarely considered the correlation between different joints in space. Thus, the spatio-temporal coupling of human joints is considered, to propose a novel spatio-temporal network based on a transformer and a gragh convolutional network (GCN) (STTG-Net). The temporal transformer is used to capture the global temporal dependencies, and the spatial GCN module is used to establish local spatial correlations between the joints for each frame. To overcome the problems of error accumulation and discontinuity in the motion prediction, a revision method based on fusion strategy is also proposed, in which the current prediction frame is fused with the previous frame. The experimental results show that the proposed prediction method has less prediction error and the prediction motion is smoother than previous prediction methods. The effectiveness of the proposed method is also demonstrated comparing it with the state-of-the-art method on the Human3.6 M dataset.</p>","PeriodicalId":52384,"journal":{"name":"Visual Computing for Industry, Biomedicine, and Art","volume":" ","pages":"19"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9338210/pdf/","citationCount":"2","resultStr":"{\"title\":\"STTG-net: a Spatio-temporal network for human motion prediction based on transformer and graph convolution network.\",\"authors\":\"Lujing Chen,&nbsp;Rui Liu,&nbsp;Xin Yang,&nbsp;Dongsheng Zhou,&nbsp;Qiang Zhang,&nbsp;Xiaopeng Wei\",\"doi\":\"10.1186/s42492-022-00112-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In recent years, human motion prediction has become an active research topic in computer vision. However, owing to the complexity and stochastic nature of human motion, it remains a challenging problem. In previous works, human motion prediction has always been treated as a typical inter-sequence problem, and most works have aimed to capture the temporal dependence between successive frames. However, although these approaches focused on the effects of the temporal dimension, they rarely considered the correlation between different joints in space. Thus, the spatio-temporal coupling of human joints is considered, to propose a novel spatio-temporal network based on a transformer and a gragh convolutional network (GCN) (STTG-Net). The temporal transformer is used to capture the global temporal dependencies, and the spatial GCN module is used to establish local spatial correlations between the joints for each frame. To overcome the problems of error accumulation and discontinuity in the motion prediction, a revision method based on fusion strategy is also proposed, in which the current prediction frame is fused with the previous frame. 
The experimental results show that the proposed prediction method has less prediction error and the prediction motion is smoother than previous prediction methods. The effectiveness of the proposed method is also demonstrated comparing it with the state-of-the-art method on the Human3.6 M dataset.</p>\",\"PeriodicalId\":52384,\"journal\":{\"name\":\"Visual Computing for Industry, Biomedicine, and Art\",\"volume\":\" \",\"pages\":\"19\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9338210/pdf/\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Visual Computing for Industry, Biomedicine, and Art\",\"FirstCategoryId\":\"1093\",\"ListUrlMain\":\"https://doi.org/10.1186/s42492-022-00112-5\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Visual Computing for Industry, Biomedicine, and Art","FirstCategoryId":"1093","ListUrlMain":"https://doi.org/10.1186/s42492-022-00112-5","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Arts and Humanities","Score":null,"Total":0}
Citations: 2

Abstract

In recent years, human motion prediction has become an active research topic in computer vision. However, owing to the complexity and stochastic nature of human motion, it remains a challenging problem. In previous works, human motion prediction has typically been treated as an inter-sequence problem, and most works have aimed to capture the temporal dependence between successive frames. Although these approaches focused on the temporal dimension, they rarely considered the correlation between different joints in space. Thus, the spatio-temporal coupling of human joints is considered here to propose a novel spatio-temporal network based on a transformer and a graph convolutional network (GCN), termed STTG-Net. The temporal transformer captures global temporal dependencies, while the spatial GCN module establishes local spatial correlations between the joints of each frame. To overcome the problems of error accumulation and discontinuity in motion prediction, a revision method based on a fusion strategy is also proposed, in which the current prediction frame is fused with the previous frame. The experimental results show that the proposed method yields lower prediction error and smoother predicted motion than previous methods. Its effectiveness is further demonstrated through comparison with state-of-the-art methods on the Human3.6M dataset.
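To make the architecture described in the abstract concrete, the following is a minimal PyTorch sketch of the idea: a transformer over the frame dimension for global temporal dependencies, a graph convolution over the joint dimension within each frame for local spatial correlations, and a fusion-based revision that blends each predicted frame with the previous one. All module names, layer sizes, the learnable adjacency, and the fusion weight `alpha` are illustrative assumptions; this is a sketch of the general technique, not the authors' implementation.

```python
# Hypothetical sketch of the STTG-Net idea (temporal transformer +
# per-frame spatial GCN + fusion-based revision). Not the paper's code.
import torch
import torch.nn as nn


class SpatialGCNLayer(nn.Module):
    """One graph convolution over the skeleton graph of a single frame."""

    def __init__(self, in_dim, out_dim, num_joints):
        super().__init__()
        # Learnable dense joint adjacency (an assumption; the paper may
        # use a fixed skeleton adjacency instead).
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):  # x: (batch, frames, joints, in_dim)
        # Aggregate features from neighbouring joints, then project.
        x = torch.einsum("jk,bfkd->bfjd", self.adj, x)
        return torch.relu(self.proj(x))


class STTGNetSketch(nn.Module):
    """Temporal transformer over frames + spatial GCN over joints."""

    def __init__(self, num_joints=22, joint_dim=3, hidden=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(num_joints * joint_dim, hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True
        )
        # Global temporal dependencies across the whole frame sequence.
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        # Local spatial correlations between joints within each frame.
        self.spatial = SpatialGCNLayer(joint_dim, joint_dim, num_joints)
        self.head = nn.Linear(hidden + num_joints * joint_dim, num_joints * joint_dim)

    def forward(self, poses):  # poses: (batch, frames, joints, joint_dim)
        b, f, j, d = poses.shape
        temporal_feat = self.temporal(self.embed(poses.reshape(b, f, j * d)))
        spatial_feat = self.spatial(poses).reshape(b, f, j * d)
        fused = torch.cat([temporal_feat, spatial_feat], dim=-1)
        return self.head(fused).reshape(b, f, j, d)  # predicted poses


def revise_with_fusion(pred_t, prev_frame, alpha=0.8):
    """Fusion-based revision: blend the current prediction with the
    previous frame to suppress error accumulation and discontinuity.
    `alpha` is a hypothetical fusion weight."""
    return alpha * pred_t + (1.0 - alpha) * prev_frame


if __name__ == "__main__":
    model = STTGNetSketch()
    history = torch.randn(2, 10, 22, 3)  # 2 sequences, 10 frames, 22 joints
    pred = model(history)
    revised = revise_with_fusion(pred[:, -1], history[:, -1])
    print(pred.shape, revised.shape)  # (2, 10, 22, 3) and (2, 22, 3)
```

The key design point the abstract emphasizes is the split of responsibilities: the transformer sees the sequence as a whole (global time), while the GCN only mixes information among joints within one frame (local space), and the fusion step anchors each new prediction to the last observed or predicted frame for smoother output.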

Source journal
Visual Computing for Industry, Biomedicine, and Art (Arts and Humanities - Visual Arts and Performing Arts)
CiteScore: 5.60 · Self-citation rate: 0.00% · Annual publications: 28 · Review time: 5 weeks