Zhicong Tang, Shuyang Gu, Chunyu Wang, Ting Zhang, Jianmin Bao, Dong Chen, Baining Guo
{"title":"VolumeDiffusion:前馈文本到3d生成与高效的体积编码器","authors":"Zhicong Tang , Shuyang Gu , Chunyu Wang , Ting Zhang , Jianmin Bao , Dong Chen , Baining Guo","doi":"10.1016/j.gmod.2025.101274","DOIUrl":null,"url":null,"abstract":"<div><div>This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that directly synthesizes 3D objects from textual descriptions. It bypasses the conventional score distillation loss based or text-to-image-to-3D approaches. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed to efficiently acquire feature volumes from multi-view images. The 3D volumes are then trained on a diffusion model for text-to-3D generation using a 3D U-Net. This research further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, demonstrates promising outcomes in producing diverse and recognizable samples from text prompts. Notably, it empowers finer control over object part characteristics through textual cues, fostering model creativity by seamlessly combining multiple concepts within a single object. This research significantly contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101274"},"PeriodicalIF":2.5000,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder\",\"authors\":\"Zhicong Tang , Shuyang Gu , Chunyu Wang , Ting Zhang , Jianmin Bao , Dong Chen , Baining Guo\",\"doi\":\"10.1016/j.gmod.2025.101274\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that directly synthesizes 3D objects from textual descriptions. It bypasses the conventional score distillation loss based or text-to-image-to-3D approaches. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed to efficiently acquire feature volumes from multi-view images. The 3D volumes are then trained on a diffusion model for text-to-3D generation using a 3D U-Net. This research further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, demonstrates promising outcomes in producing diverse and recognizable samples from text prompts. Notably, it empowers finer control over object part characteristics through textual cues, fostering model creativity by seamlessly combining multiple concepts within a single object. 
This research significantly contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.</div></div>\",\"PeriodicalId\":55083,\"journal\":{\"name\":\"Graphical Models\",\"volume\":\"140 \",\"pages\":\"Article 101274\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2025-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Graphical Models\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1524070325000219\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Graphical Models","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1524070325000219","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder
This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that synthesizes 3D objects directly from textual descriptions, bypassing conventional approaches based on score distillation loss or text-to-image-to-3D pipelines. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed that efficiently acquires feature volumes from multi-view images. A diffusion model built on a 3D U-Net is then trained on these feature volumes for text-to-3D generation. The work further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, produces diverse and recognizable samples from text prompts. Notably, it enables fine-grained control over object-part characteristics through textual cues, fostering creativity by seamlessly combining multiple concepts within a single object. This research contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.
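To make the pipeline the abstract describes more concrete, the following is a minimal PyTorch sketch of one training step: a feature volume (standing in for the volumetric encoder's output) is noised under a DDPM-style schedule, and a toy 3D U-Net, conditioned on a text embedding, is trained to predict the noise. This is not the authors' implementation; every module name, shape, and hyperparameter below is an illustrative assumption.

```python
# Illustrative sketch only (assumed shapes and modules), not the paper's code.
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Toy stand-in for the paper's 3D U-Net; a real one would add skip
    connections, attention, timestep embeddings, and many more channels."""
    def __init__(self, ch=8, text_dim=16):
        super().__init__()
        self.down = nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1)
        self.text_proj = nn.Linear(text_dim, 2 * ch)

    def forward(self, x, t, text_emb):
        h = torch.relu(self.down(x))
        # Inject the text condition as a per-channel bias (a simple, common
        # choice; timestep conditioning is omitted here for brevity).
        h = h + self.text_proj(text_emb)[:, :, None, None, None]
        return self.up(h)

# Hypothetical inputs: feature volumes from the volumetric encoder and a
# text embedding from some text encoder (both random stand-ins here).
B, C, D = 2, 8, 16                      # batch, channels, spatial resolution
volumes = torch.randn(B, C, D, D, D)
text_emb = torch.randn(B, 16)

model = TinyUNet3D(ch=C, text_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One DDPM-style step: noise the volume at a random timestep, predict the noise.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

t = torch.randint(0, T, (B,))
noise = torch.randn_like(volumes)
ab = alphas_bar[t].view(B, 1, 1, 1, 1)
noisy = ab.sqrt() * volumes + (1.0 - ab).sqrt() * noise

pred = model(noisy, t, text_emb)
loss = torch.nn.functional.mse_loss(pred, noise)
loss.backward()
opt.step()
```

At inference, the same network would be applied iteratively to denoise a random volume into a feature volume, which the decoder side of the framework would then render or convert into a 3D object.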
Journal description:
Graphical Models is recognized internationally as a highly rated, top-tier journal and is focused on the creation, geometric processing, animation, and visualization of graphical models and on their applications in engineering, science, culture, and entertainment. GMOD provides its readers with thoroughly reviewed and carefully selected papers that disseminate exciting innovations, teach rigorous theoretical foundations, propose robust and efficient solutions, or describe ambitious systems or applications across a variety of topics.
We invite papers in five categories: research (contributions of novel theoretical or practical approaches or solutions), survey (opinionated views of the state-of-the-art and challenges in a specific topic), system (the architecture and implementation details of an innovative architecture for a complete system that supports model/animation design, acquisition, analysis, or visualization), application (description of a novel application of known techniques and evaluation of its impact), or lecture (an elegant and inspiring perspective on previously published results that clarifies them and teaches them in a new way).
GMOD offers its authors an accelerated review, feedback from experts in the field, immediate online publication of accepted papers, no restriction on color and length (when justified by the content) in the online version, and a broad promotion of published papers. A prestigious group of editors selected from among the premier international researchers in their fields oversees the review process.