POPDG: Popular 3D Dance Generation with PopDanceSet

Zhenye Luo, Min Ren, Xuecai Hu, Yongzhen Huang, Li Yao
{"title":"POPDG: Popular 3D Dance Generation with PopDanceSet","authors":"Zhenye Luo, Min Ren, Xuecai Hu, Yongzhen Huang, Li Yao","doi":"arxiv-2405.03178","DOIUrl":null,"url":null,"abstract":"Generating dances that are both lifelike and well-aligned with music\ncontinues to be a challenging task in the cross-modal domain. This paper\nintroduces PopDanceSet, the first dataset tailored to the preferences of young\naudiences, enabling the generation of aesthetically oriented dances. And it\nsurpasses the AIST++ dataset in music genre diversity and the intricacy and\ndepth of dance movements. Moreover, the proposed POPDG model within the iDDPM\nframework enhances dance diversity and, through the Space Augmentation\nAlgorithm, strengthens spatial physical connections between human body joints,\nensuring that increased diversity does not compromise generation quality. A\nstreamlined Alignment Module is also designed to improve the temporal alignment\nbetween dance and music. Extensive experiments show that POPDG achieves SOTA\nresults on two datasets. Furthermore, the paper also expands on current\nevaluation metrics. The dataset and code are available at\nhttps://github.com/Luke-Luo1/POPDG.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Sound","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2405.03178","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Generating dances that are both lifelike and well aligned with music remains a challenging cross-modal task. This paper introduces PopDanceSet, the first dataset tailored to the preferences of young audiences, enabling the generation of aesthetically oriented dances. It surpasses the AIST++ dataset in music-genre diversity and in the intricacy and depth of its dance movements. The proposed POPDG model, built within the iDDPM framework, enhances dance diversity and, through the Space Augmentation Algorithm, strengthens the spatial physical connections between human body joints, ensuring that the increased diversity does not compromise generation quality. A streamlined Alignment Module is also designed to improve the temporal alignment between dance and music. Extensive experiments show that POPDG achieves state-of-the-art (SOTA) results on two datasets, and the paper further extends the current evaluation metrics. The dataset and code are available at https://github.com/Luke-Luo1/POPDG.
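The abstract only sketches the architecture, so the block below is a minimal, hypothetical illustration (not the authors' POPDG implementation) of the two ideas it names: a diffusion-style denoiser conditioned on per-frame music features, and a feature-mixing step over the kinematic tree standing in for the Space Augmentation idea of strengthening spatial connections between joints. The joint count, feature dimensions, parent list, and the placeholder noise-schedule value are all assumptions made for illustration.

```python
# Minimal sketch of a music-conditioned diffusion denoiser for pose sequences.
# Everything here (shapes, skeleton, schedule) is an assumed stand-in, not the
# paper's actual POPDG architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_JOINTS = 24   # assumed SMPL-like skeleton; the real dataset may differ
POSE_DIM = 6      # assumed per-joint rotation representation
MUSIC_DIM = 35    # assumed per-frame music feature size
HIDDEN = 256

# Hypothetical parent list defining the kinematic tree (a simple chain here;
# a real skeleton would use its actual parent indices).
PARENTS = [-1] + list(range(NUM_JOINTS - 1))

def kinematic_adjacency(parents):
    """Row-normalized joint adjacency (self plus parent/child links)."""
    n = len(parents)
    adj = torch.eye(n)
    for child, parent in enumerate(parents):
        if parent >= 0:
            adj[child, parent] = 1.0
            adj[parent, child] = 1.0
    return adj / adj.sum(dim=1, keepdim=True)

class ToyMusicConditionedDenoiser(nn.Module):
    """Predicts the noise added to a pose sequence, given music and timestep."""
    def __init__(self):
        super().__init__()
        self.register_buffer("adj", kinematic_adjacency(PARENTS))
        self.pose_in = nn.Linear(POSE_DIM, HIDDEN)
        self.music_in = nn.Linear(MUSIC_DIM, HIDDEN)
        self.time_in = nn.Linear(1, HIDDEN)
        self.out = nn.Linear(HIDDEN, POSE_DIM)

    def forward(self, noisy_pose, music, t):
        # noisy_pose: (B, T, J, POSE_DIM); music: (B, T, MUSIC_DIM); t: (B,)
        h = self.pose_in(noisy_pose)
        # Mix per-joint features along the skeleton so each joint also sees its
        # parent/children (a stand-in for spatial joint connections).
        h = torch.einsum("ij,btjd->btid", self.adj, h)
        cond = self.music_in(music) + self.time_in(t.float().view(-1, 1)).unsqueeze(1)
        h = h + cond.unsqueeze(2)        # broadcast the condition over joints
        return self.out(F.relu(h))       # predicted noise, same shape as pose

if __name__ == "__main__":
    model = ToyMusicConditionedDenoiser()
    pose = torch.randn(2, 120, NUM_JOINTS, POSE_DIM)   # 2 clips, 120 frames
    music = torch.randn(2, 120, MUSIC_DIM)
    t = torch.randint(0, 1000, (2,))
    noise = torch.randn_like(pose)
    alpha_bar = 0.5                                    # placeholder schedule value
    noisy = (alpha_bar ** 0.5) * pose + ((1 - alpha_bar) ** 0.5) * noise
    # One simplified DDPM-style training objective: predict the injected noise.
    loss = F.mse_loss(model(noisy, music, t), noise)
    print(loss.item())
```

In a full model, the simple adjacency mixing would typically be replaced by attention or a deeper graph module, and the fixed schedule value would follow an iDDPM-style noise schedule; this sketch only shows where the music conditioning and joint connectivity would enter.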