Authors: Ziyun Qian; Dingkang Yang; Mingcheng Li; Zeyu Xiao; Lihua Zhang
DOI: 10.1109/LSP.2025.3546885
Journal: IEEE Signal Processing Letters, vol. 32, pp. 1101-1105
Publication date: 2025-03-03 (Journal Article)
Impact factor: 3.2; JCR: Q2 (Engineering, Electrical & Electronic); CAS region: 2 (Engineering & Technology)
URL: https://ieeexplore.ieee.org/document/10908584/
MSTDF: Motion Style Transfer Towards High Visual Fidelity Based on Dynamic Fusion
Emotion-guided motion style transfer is a novel research direction, enabling the efficient generation of motion in various emotional styles for use in films, games, and other domains. However, existing methods primarily rely on global feature statistics for motion style transfer, neglecting local semantic structure and resulting in the degradation of motion content structure. This letter proposes a novel Motion Style Transfer based on Dynamic Fusion (MSTDF) framework, which treats content and style motion as distinct signals and employs dynamic fusion for high-fidelity motion style transfer. Additionally, to address the challenge of traditional discriminators capturing subtle motion style features, we propose the Motion Dynamic Fusion (MDF) discriminator to capture the details and fine-grained style characteristics of motion sequences, assisting the generator in producing higher-fidelity stylized motion. Finally, extensive experiments on the Xia dataset demonstrate that our method surpasses state-of-the-art methods in qualitative and quantitative comparisons.
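The "global feature statistics" approach the abstract critiques is commonly instantiated as Adaptive Instance Normalization (AdaIN), which transfers only the per-channel mean and standard deviation of the style features onto the content features. A minimal sketch for intuition (the array shapes and function name are illustrative assumptions, not the letter's actual MSTDF method):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization over per-channel statistics.

    content, style: arrays of shape (channels, frames) standing in for
    motion feature sequences (shapes are illustrative, not from the letter).
    """
    c_mean = content.mean(axis=1, keepdims=True)
    c_std = content.std(axis=1, keepdims=True) + eps
    s_mean = style.mean(axis=1, keepdims=True)
    s_std = style.std(axis=1, keepdims=True) + eps
    # Normalize content features, then re-scale with the style statistics.
    # Only global (mean/std) information is transferred per channel, which
    # is exactly the limitation the letter attributes to prior methods:
    # local semantic structure in the content sequence is ignored.
    return s_std * (content - c_mean) / c_std + s_mean
```

Because the output matches only the style's global statistics, fine-grained temporal structure can be distorted; this motivates the dynamic-fusion design described above.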
Journal Introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters may also be presented, within one year of their appearance, at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.