Xiaoning Sun;Huaijiang Sun;Dong Wei;Jin Wang;Bin Li;Weiqing Li;Jianfeng Lu
{"title":"用于人体运动预测的统一特权知识提炼框架","authors":"Xiaoning Sun;Huaijiang Sun;Dong Wei;Jin Wang;Bin Li;Weiqing Li;Jianfeng Lu","doi":"10.1109/TCSVT.2024.3440488","DOIUrl":null,"url":null,"abstract":"Previous works on human motion prediction follow the pattern of building an extrapolation mapping between the sequence observed and the one to be predicted. However, the inherent difficulty of time-series extrapolation and complexity of human motion data still result in many failure cases. In this paper, we explore a longer horizon of sequence with more poses following behind, which breaks the limit in extrapolation problems that data/information on the other side of the predictive target is completely unknown. As these poses are unavailable for testing, we regard them as a privileged sequence, and propose a Two-stage Privileged Knowledge Distillation framework that incorporates privileged information in the forecasting process while avoiding direct use of it. Specifically, in the first stage, both the observed and privileged sequence are encoded for interpolation, with Privileged-sequence-Encoder (Priv-Encoder) learning privileged knowledge (PK) simultaneously. Then, in the second stage where privileged sequence is not observable, a novel PK-Simulator distills PK by approximating the behavior of Priv-Encoder, but only taking as input the observed sequence, to enable a PK-aware prediction pattern. Moreover, we present a One-stage version of this framework, using Shared Encoder that integrates the observation encoding in both interpolation and prediction branches to realize parallel training, which helps produce the most conducive PK to prediction pipeline. Experimental results show that our frameworks are model-agnostic, and can be applied to existing motion prediction models with encoder-decoder architecture to achieve improved performance.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"34 12","pages":"12937-12948"},"PeriodicalIF":8.3000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unified Privileged Knowledge Distillation Framework for Human Motion Prediction\",\"authors\":\"Xiaoning Sun;Huaijiang Sun;Dong Wei;Jin Wang;Bin Li;Weiqing Li;Jianfeng Lu\",\"doi\":\"10.1109/TCSVT.2024.3440488\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Previous works on human motion prediction follow the pattern of building an extrapolation mapping between the sequence observed and the one to be predicted. However, the inherent difficulty of time-series extrapolation and complexity of human motion data still result in many failure cases. In this paper, we explore a longer horizon of sequence with more poses following behind, which breaks the limit in extrapolation problems that data/information on the other side of the predictive target is completely unknown. As these poses are unavailable for testing, we regard them as a privileged sequence, and propose a Two-stage Privileged Knowledge Distillation framework that incorporates privileged information in the forecasting process while avoiding direct use of it. Specifically, in the first stage, both the observed and privileged sequence are encoded for interpolation, with Privileged-sequence-Encoder (Priv-Encoder) learning privileged knowledge (PK) simultaneously. 
Then, in the second stage where privileged sequence is not observable, a novel PK-Simulator distills PK by approximating the behavior of Priv-Encoder, but only taking as input the observed sequence, to enable a PK-aware prediction pattern. Moreover, we present a One-stage version of this framework, using Shared Encoder that integrates the observation encoding in both interpolation and prediction branches to realize parallel training, which helps produce the most conducive PK to prediction pipeline. Experimental results show that our frameworks are model-agnostic, and can be applied to existing motion prediction models with encoder-decoder architecture to achieve improved performance.\",\"PeriodicalId\":13082,\"journal\":{\"name\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"volume\":\"34 12\",\"pages\":\"12937-12948\"},\"PeriodicalIF\":8.3000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Circuits and Systems for Video Technology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10630839/\",\"RegionNum\":1,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10630839/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Unified Privileged Knowledge Distillation Framework for Human Motion Prediction
Previous works on human motion prediction follow the pattern of building an extrapolation mapping between the observed sequence and the one to be predicted. However, the inherent difficulty of time-series extrapolation and the complexity of human motion data still result in many failure cases. In this paper, we explore a longer sequence horizon with more poses following behind, which breaks the limitation of extrapolation problems that data/information on the other side of the prediction target is completely unknown. As these poses are unavailable at test time, we regard them as a privileged sequence and propose a Two-stage Privileged Knowledge Distillation framework that incorporates privileged information into the forecasting process while avoiding its direct use. Specifically, in the first stage, both the observed and the privileged sequences are encoded for interpolation, with the Privileged-sequence-Encoder (Priv-Encoder) learning privileged knowledge (PK) simultaneously. Then, in the second stage, where the privileged sequence is not observable, a novel PK-Simulator distills PK by approximating the behavior of Priv-Encoder while taking only the observed sequence as input, enabling a PK-aware prediction pattern. Moreover, we present a One-stage version of this framework, in which a Shared Encoder integrates the observation encoding of both the interpolation and prediction branches to realize parallel training, which helps produce the PK most conducive to the prediction pipeline. Experimental results show that our frameworks are model-agnostic and can be applied to existing motion prediction models with an encoder-decoder architecture to achieve improved performance.
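The sketch below illustrates, under stated assumptions, how the two-stage scheme described above could be wired up: stage 1 trains an interpolation branch on both the observed and privileged sequences so that Priv-Encoder learns PK, and stage 2 trains a PK-Simulator to mimic the Priv-Encoder's features from the observed sequence alone before the prediction decoder consumes them. All concrete choices here (GRU encoders, feature sizes, MSE reconstruction and distillation losses, how features are concatenated) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of the two-stage privileged knowledge distillation idea.
# Architectural details are assumptions for illustration only.
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Encodes a pose sequence of shape (B, T, pose_dim) into a fixed-length feature."""
    def __init__(self, pose_dim=66, feat_dim=128):
        super().__init__()
        self.gru = nn.GRU(pose_dim, feat_dim, batch_first=True)

    def forward(self, x):
        _, h = self.gru(x)           # h: (1, B, feat_dim)
        return h.squeeze(0)          # (B, feat_dim)

class SeqDecoder(nn.Module):
    """Decodes a feature vector into t_out poses of dimension pose_dim."""
    def __init__(self, feat_dim=256, pose_dim=66, t_out=25):
        super().__init__()
        self.t_out, self.pose_dim = t_out, pose_dim
        self.fc = nn.Linear(feat_dim, t_out * pose_dim)

    def forward(self, z):
        return self.fc(z).view(-1, self.t_out, self.pose_dim)

obs_enc    = SeqEncoder()            # encodes the observed sequence
priv_enc   = SeqEncoder()            # Priv-Encoder: encodes the privileged sequence
pk_sim     = SeqEncoder()            # PK-Simulator: sees only the observed sequence
interp_dec = SeqDecoder()            # interpolation decoder (stage 1)
pred_dec   = SeqDecoder()            # prediction decoder (stage 2)
mse = nn.MSELoss()

def stage1_step(obs, priv, target, opt):
    """Stage 1: interpolate the target from both sequences; Priv-Encoder learns PK."""
    z = torch.cat([obs_enc(obs), priv_enc(priv)], dim=-1)
    loss = mse(interp_dec(z), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def stage2_step(obs, priv, target, opt):
    """Stage 2: PK-Simulator distills PK by matching the frozen Priv-Encoder's features
    while taking only the observed sequence as input; the privileged sequence is used
    here solely to produce the distillation target (available during training only)."""
    with torch.no_grad():
        z_pk_teacher = priv_enc(priv)            # teacher feature, not backpropagated
    z_obs, z_pk_student = obs_enc(obs), pk_sim(obs)
    pred = pred_dec(torch.cat([z_obs, z_pk_student], dim=-1))
    loss = mse(pred, target) + mse(z_pk_student, z_pk_teacher)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def predict(obs):
    """Inference: the privileged sequence is unavailable, so only simulated PK is used."""
    return pred_dec(torch.cat([obs_enc(obs), pk_sim(obs)], dim=-1))
```

In this sketch, the stage-1 optimizer would cover obs_enc, priv_enc, and interp_dec, while the stage-2 optimizer covers pk_sim, pred_dec, and (optionally) obs_enc with priv_enc kept frozen; at test time only predict(obs) is called, so no privileged data is ever required outside training.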
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.