DECOLLAGE: 3D Detailization by Controllable, Localized, and Learned Geometry Enhancement

Qimin Chen, Zhiqin Chen, Vladimir G. Kim, Noam Aigerman, Hao Zhang, Siddhartha Chaudhuri
{"title":"DECOLLAGE:通过可控、定位和学习的几何增强技术实现 3D 细节化","authors":"Qimin Chen, Zhiqin Chen, Vladimir G. Kim, Noam Aigerman, Hao Zhang, Siddhartha Chaudhuri","doi":"arxiv-2409.06129","DOIUrl":null,"url":null,"abstract":"We present a 3D modeling method which enables end-users to refine or\ndetailize 3D shapes using machine learning, expanding the capabilities of\nAI-assisted 3D content creation. Given a coarse voxel shape (e.g., one produced\nwith a simple box extrusion tool or via generative modeling), a user can\ndirectly \"paint\" desired target styles representing compelling geometric\ndetails, from input exemplar shapes, over different regions of the coarse\nshape. These regions are then up-sampled into high-resolution geometries which\nadhere with the painted styles. To achieve such controllable and localized 3D\ndetailization, we build on top of a Pyramid GAN by making it masking-aware. We\ndevise novel structural losses and priors to ensure that our method preserves\nboth desired coarse structures and fine-grained features even if the painted\nstyles are borrowed from diverse sources, e.g., different semantic parts and\neven different shape categories. Through extensive experiments, we show that\nour ability to localize details enables novel interactive creative workflows\nand applications. Our experiments further demonstrate that in comparison to\nprior techniques built on global detailization, our method generates\nstructure-preserving, high-resolution stylized geometries with more coherent\nshape details and style transitions.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DECOLLAGE: 3D Detailization by Controllable, Localized, and Learned Geometry Enhancement\",\"authors\":\"Qimin Chen, Zhiqin Chen, Vladimir G. Kim, Noam Aigerman, Hao Zhang, Siddhartha Chaudhuri\",\"doi\":\"arxiv-2409.06129\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a 3D modeling method which enables end-users to refine or\\ndetailize 3D shapes using machine learning, expanding the capabilities of\\nAI-assisted 3D content creation. Given a coarse voxel shape (e.g., one produced\\nwith a simple box extrusion tool or via generative modeling), a user can\\ndirectly \\\"paint\\\" desired target styles representing compelling geometric\\ndetails, from input exemplar shapes, over different regions of the coarse\\nshape. These regions are then up-sampled into high-resolution geometries which\\nadhere with the painted styles. To achieve such controllable and localized 3D\\ndetailization, we build on top of a Pyramid GAN by making it masking-aware. We\\ndevise novel structural losses and priors to ensure that our method preserves\\nboth desired coarse structures and fine-grained features even if the painted\\nstyles are borrowed from diverse sources, e.g., different semantic parts and\\neven different shape categories. Through extensive experiments, we show that\\nour ability to localize details enables novel interactive creative workflows\\nand applications. 
Our experiments further demonstrate that in comparison to\\nprior techniques built on global detailization, our method generates\\nstructure-preserving, high-resolution stylized geometries with more coherent\\nshape details and style transitions.\",\"PeriodicalId\":501174,\"journal\":{\"name\":\"arXiv - CS - Graphics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06129\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06129","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

We present a 3D modeling method that enables end-users to refine or detailize 3D shapes using machine learning, expanding the capabilities of AI-assisted 3D content creation. Given a coarse voxel shape (e.g., one produced with a simple box-extrusion tool or via generative modeling), a user can directly "paint" desired target styles, representing compelling geometric details drawn from input exemplar shapes, over different regions of the coarse shape. These regions are then up-sampled into high-resolution geometries that adhere to the painted styles. To achieve such controllable and localized 3D detailization, we build on top of a Pyramid GAN by making it masking-aware. We devise novel structural losses and priors to ensure that our method preserves both the desired coarse structures and fine-grained features, even if the painted styles are borrowed from diverse sources, e.g., different semantic parts or even different shape categories. Through extensive experiments, we show that our ability to localize details enables novel interactive creative workflows and applications. Our experiments further demonstrate that, in comparison to prior techniques built on global detailization, our method generates structure-preserving, high-resolution stylized geometries with more coherent shape details and style transitions.
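
To make the masking-aware conditioning idea concrete, here is a minimal, hypothetical sketch (not the authors' released code; the class and parameter names such as MaskAwareDetailizer, num_styles, and upsample_factor are assumptions for illustration): the coarse voxel occupancy grid is concatenated with a one-hot, per-voxel style-label mask representing the user's "painted" regions, and the result is passed through an upsampling generator so that each region can be detailized toward its assigned exemplar style.

```python
# Hypothetical sketch of masking-aware conditional upsampling for localized
# 3D detailization. Not the authors' implementation; it only shows how a
# coarse voxel grid plus a per-voxel style mask can jointly condition an
# upsampling generator.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskAwareDetailizer(nn.Module):
    """Upsamples a coarse voxel occupancy grid, conditioned on per-voxel style labels."""

    def __init__(self, num_styles: int, hidden: int = 32):
        super().__init__()
        self.num_styles = num_styles
        # Input channels: 1 (coarse occupancy) + num_styles (one-hot style mask).
        self.encode = nn.Sequential(
            nn.Conv3d(1 + num_styles, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Two strided transposed convolutions give a 4x spatial upsampling.
        self.upsample = nn.Sequential(
            nn.ConvTranspose3d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose3d(hidden, hidden, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(hidden, 1, kernel_size=3, padding=1),  # high-res occupancy logits
        )

    def forward(self, coarse_voxels: torch.Tensor, style_mask: torch.Tensor) -> torch.Tensor:
        # coarse_voxels: (B, 1, D, H, W) occupancy in [0, 1]
        # style_mask:    (B, D, H, W) integer style label painted on each voxel
        onehot = F.one_hot(style_mask, self.num_styles).permute(0, 4, 1, 2, 3).float()
        x = torch.cat([coarse_voxels, onehot], dim=1)
        return self.upsample(self.encode(x))


if __name__ == "__main__":
    # Toy example: a 16^3 coarse shape with two painted style regions -> 64^3 logits.
    gen = MaskAwareDetailizer(num_styles=2)
    coarse = (torch.rand(1, 1, 16, 16, 16) > 0.5).float()
    mask = torch.zeros(1, 16, 16, 16, dtype=torch.long)
    mask[:, :, 8:, :] = 1  # paint the upper half with style 1
    detailed_logits = gen(coarse, mask)
    print(detailed_logits.shape)  # torch.Size([1, 1, 64, 64, 64])
```

In the actual method, such a generator would be trained adversarially within the Pyramid GAN framework, with the structural losses and priors described above encouraging preservation of the coarse structure and coherent style transitions across region boundaries.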