DreamBeast: Distilling 3D Fantastical Animals with Part-Aware Knowledge Transfer
{"title":"梦幻野兽利用部分感知知识转移提炼 3D 梦幻动物","authors":"Runjia Li, Junlin Han, Luke Melas-Kyriazi, Chunyi Sun, Zhaochong An, Zhongrui Gui, Shuyang Sun, Philip Torr, Tomas Jakab","doi":"arxiv-2409.08271","DOIUrl":null,"url":null,"abstract":"We present DreamBeast, a novel method based on score distillation sampling\n(SDS) for generating fantastical 3D animal assets composed of distinct parts.\nExisting SDS methods often struggle with this generation task due to a limited\nunderstanding of part-level semantics in text-to-image diffusion models. While\nrecent diffusion models, such as Stable Diffusion 3, demonstrate a better\npart-level understanding, they are prohibitively slow and exhibit other common\nproblems associated with single-view diffusion models. DreamBeast overcomes\nthis limitation through a novel part-aware knowledge transfer mechanism. For\neach generated asset, we efficiently extract part-level knowledge from the\nStable Diffusion 3 model into a 3D Part-Affinity implicit representation. This\nenables us to instantly generate Part-Affinity maps from arbitrary camera\nviews, which we then use to modulate the guidance of a multi-view diffusion\nmodel during SDS to create 3D assets of fantastical animals. DreamBeast\nsignificantly enhances the quality of generated 3D creatures with\nuser-specified part compositions while reducing computational overhead, as\ndemonstrated by extensive quantitative and qualitative evaluations.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DreamBeast: Distilling 3D Fantastical Animals with Part-Aware Knowledge Transfer\",\"authors\":\"Runjia Li, Junlin Han, Luke Melas-Kyriazi, Chunyi Sun, Zhaochong An, Zhongrui Gui, Shuyang Sun, Philip Torr, Tomas Jakab\",\"doi\":\"arxiv-2409.08271\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present DreamBeast, a novel method based on score distillation sampling\\n(SDS) for generating fantastical 3D animal assets composed of distinct parts.\\nExisting SDS methods often struggle with this generation task due to a limited\\nunderstanding of part-level semantics in text-to-image diffusion models. While\\nrecent diffusion models, such as Stable Diffusion 3, demonstrate a better\\npart-level understanding, they are prohibitively slow and exhibit other common\\nproblems associated with single-view diffusion models. DreamBeast overcomes\\nthis limitation through a novel part-aware knowledge transfer mechanism. For\\neach generated asset, we efficiently extract part-level knowledge from the\\nStable Diffusion 3 model into a 3D Part-Affinity implicit representation. This\\nenables us to instantly generate Part-Affinity maps from arbitrary camera\\nviews, which we then use to modulate the guidance of a multi-view diffusion\\nmodel during SDS to create 3D assets of fantastical animals. 
DreamBeast\\nsignificantly enhances the quality of generated 3D creatures with\\nuser-specified part compositions while reducing computational overhead, as\\ndemonstrated by extensive quantitative and qualitative evaluations.\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08271\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08271","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Runjia Li, Junlin Han, Luke Melas-Kyriazi, Chunyi Sun, Zhaochong An, Zhongrui Gui, Shuyang Sun, Philip Torr, Tomas Jakab
We present DreamBeast, a novel method based on score distillation sampling
(SDS) for generating fantastical 3D animal assets composed of distinct parts.
Existing SDS methods often struggle with this generation task due to a limited
understanding of part-level semantics in text-to-image diffusion models. While
recent diffusion models, such as Stable Diffusion 3, demonstrate a better
part-level understanding, they are prohibitively slow and exhibit other common
problems associated with single-view diffusion models. DreamBeast overcomes
this limitation through a novel part-aware knowledge transfer mechanism. For
each generated asset, we efficiently extract part-level knowledge from the
Stable Diffusion 3 model into a 3D Part-Affinity implicit representation. This
enables us to instantly generate Part-Affinity maps from arbitrary camera
views, which we then use to modulate the guidance of a multi-view diffusion
model during SDS to create 3D assets of fantastical animals. DreamBeast
significantly enhances the quality of generated 3D creatures with
user-specified part compositions while reducing computational overhead, as
demonstrated by extensive quantitative and qualitative evaluations.
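To make the pipeline described in the abstract more concrete, below is a minimal, hypothetical PyTorch-style sketch of a part-aware SDS update. Every name here (PartAffinityField-style `affinity_field`, `asset.render`, `mv_diffusion.sds_gradient`, `part_prompts`) is an illustrative placeholder and not the authors' API; it assumes a differentiable 3D representation, a Part-Affinity field already distilled once per asset from Stable Diffusion 3, and a multi-view diffusion model that exposes a per-view SDS gradient. The actual DreamBeast implementation may differ substantially.

```python
import torch

def part_aware_sds_step(asset, affinity_field, mv_diffusion,
                        part_prompts, cameras, optimizer):
    """One hypothetical optimization step: render each view, look up the
    cached Part-Affinity map for that camera, and use it to spatially gate
    per-part SDS guidance from a multi-view diffusion model."""
    optimizer.zero_grad()
    loss = 0.0
    for cam in cameras:
        rgb = asset.render(cam)                # (H, W, 3), differentiable render
        affinity = affinity_field.render(cam)  # (H, W, P), one channel per part, precomputed
        for p, prompt in enumerate(part_prompts):   # e.g. ["eagle head", "lion body"]
            # SDS gradient w.r.t. the rendered image, conditioned on this part's prompt.
            g = mv_diffusion.sds_gradient(rgb.detach(), prompt)   # (H, W, 3)
            # Gate the guidance so each part prompt only shapes its own region.
            g = affinity[..., p:p + 1] * g
            # Standard SDS surrogate loss: (stop-gradient guidance) · rendered image.
            loss = loss + (g.detach() * rgb).sum()
    (loss / len(cameras)).backward()
    optimizer.step()
```

The key design idea illustrated (under the assumptions above) is that the Part-Affinity maps act as spatial masks on the diffusion guidance, so knowledge about where each part belongs, distilled once from Stable Diffusion 3, steers the cheaper multi-view model at every SDS iteration.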