ACM SIGGRAPH 2022 Conference Proceedings: Latest Publications

Low-poly Mesh Generation for Building Models
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530716
Xifeng Gao, Kui Wu, Zherong Pan
Abstract: As a common practice, game modelers manually craft low-poly meshes for given 3D building models to achieve an ideal balance between a small element count and visual similarity. This can take hours and involve tedious trial and error. We propose a novel and simple algorithm that automates this process by converting high-poly 3D building models into simple yet visually faithful low-poly meshes. Our algorithm has three stages: first, a watertight, self-collision-free visual hull is generated by Boolean-intersecting 3D extrusions of the input's silhouettes; we then carve notable but redundant structures out of the visual hull by Boolean-subtracting 3D primitives derived from parts of the input; finally, we generate a progressively simplified low-poly mesh sequence from the carved mesh and extract the Pareto front for users to select the desired output. Each stage of our approach is guided by visual metrics, aiming to preserve visual similarity to the input. We have tested our method on a dataset of 100 building models in different styles, most of which are used in popular digital games. We demonstrate superior robustness and quality by comparing with state-of-the-art competing techniques. An executable program for this paper is available at lowpoly-modeling.github.io.
Citations: 4
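The last stage above reduces to a classic bi-objective selection: among the progressively simplified meshes, keep only those where no other candidate is both smaller and more visually faithful. A minimal sketch of that Pareto-front extraction follows; the function name and the (face count, visual error) scores are illustrative, not taken from the paper's code.

```python
def pareto_front(candidates):
    """Return the subset of (face_count, visual_error) pairs not dominated
    by any other candidate (lower is better on both axes)."""
    # Sort by face count, breaking ties by visual error.
    ordered = sorted(candidates, key=lambda c: (c[0], c[1]))
    front = []
    best_error = float("inf")
    for face_count, error in ordered:
        # A candidate joins the front only if it improves on the best
        # visual error seen among all smaller meshes.
        if error < best_error:
            front.append((face_count, error))
            best_error = error
    return front

# Hypothetical scores for a simplification sequence: (faces, visual error).
meshes = [(5000, 0.01), (2500, 0.02), (2600, 0.05), (800, 0.09), (900, 0.12)]
print(pareto_front(meshes))  # -> [(800, 0.09), (2500, 0.02), (5000, 0.01)]
```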
Stability-Aware Simplification of Curve Networks
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530711
William Neveu, Ivan Puhachov, Bernhard Thomaszewski, Mikhail Bessmeltsev
Abstract: Designing curve networks for fabrication requires simultaneous consideration of structural stability, cost effectiveness, and visual appeal: complex, interrelated objectives that make manual design a difficult and tedious task. We present a novel method for fabrication-aware simplification of curve networks that algorithmically selects a stable subset of given 3D curves. While stability is traditionally measured as the magnitude of deformation induced by a set of pre-defined loads, predicting the applied forces for everyday objects can be challenging. Instead, we directly optimize for minimal deformation under the worst-case load. Our technical contribution is a novel formulation of 3D curve network simplification for worst-case stability, leading to a mixed-integer semidefinite programming (MI-SDP) problem. We show that although solving the MI-SDP directly is infeasible, a physical insight suggests an efficient greedy approximation algorithm. We demonstrate the potential of our approach on a variety of curve network designs and validate its effectiveness against simpler alternatives using numerical experiments.
Citations: 3
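The abstract does not spell out the greedy approximation, so the following is only a plausible reading: treat the smallest eigenvalue of a reduced stiffness matrix as a worst-case stability proxy and greedily drop the curve whose removal hurts it least. The one-DOF-per-curve stiffness matrix and the eigenvalue proxy are assumptions for illustration, not the paper's actual SDP objective.

```python
import numpy as np

def stability(K, keep):
    """Proxy for worst-case stability of a curve subset: the smallest
    eigenvalue of the reduced stiffness matrix (larger = stiffer under
    the worst-case unit load)."""
    idx = np.flatnonzero(keep)
    if idx.size == 0:
        return -np.inf
    return float(np.linalg.eigvalsh(K[np.ix_(idx, idx)])[0])

def greedy_simplify(K, budget):
    """Greedily drop curves one at a time, always removing the curve whose
    removal degrades the stability proxy the least."""
    keep = np.ones(K.shape[0], dtype=bool)
    while keep.sum() > budget:
        best_i, best_s = None, -np.inf
        for i in np.flatnonzero(keep):
            keep[i] = False
            s = stability(K, keep)
            keep[i] = True
            if s > best_s:
                best_i, best_s = i, s
        keep[best_i] = False
    return keep

# Toy symmetric positive-definite "stiffness" over 6 hypothetical curves.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
K = A @ A.T + 6 * np.eye(6)
print(greedy_simplify(K, budget=3))
```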
Reconstructing Translucent Objects using Differentiable Rendering
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530714
Xi Deng, Fujun Luan, B. Walter, K. Bala, Steve Marschner
Abstract: Inverse rendering is a powerful approach to modeling objects from photographs, and we extend previous techniques to handle translucent materials that exhibit subsurface scattering. Representing translucency using a heterogeneous bidirectional scattering-surface reflectance distribution function (BSSRDF), we extend the framework of path-space differentiable rendering to accommodate both surface and subsurface reflection. This introduces new types of paths that require new methods for sampling moving discontinuities in material space arising from visibility and moving geometry. We use this differentiable rendering method in an end-to-end approach that jointly recovers heterogeneous translucent materials (represented by a BSSRDF) and the detailed geometry of an object (represented by a mesh) from a sparse set of measured 2D images, in a coarse-to-fine framework incorporating Laplacian preconditioning for the geometry. To efficiently optimize our models in the presence of the Monte Carlo noise introduced by the BSSRDF integral, we introduce a dual-buffer method for evaluating the L2 image loss. This efficiently avoids potential bias in gradient estimation due to the correlation between estimates of image pixels and their derivatives, and enables correct convergence of the optimizer even at low sample counts in the renderer. We validate our derivatives against finite differences and demonstrate the effectiveness of our technique by comparing inverse-rendering performance with previous methods. We show superior reconstruction quality on a set of synthetic and real-world translucent objects compared to previous methods that model only surface reflection.
Citations: 11
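The dual-buffer trick is easy to make concrete: render two statistically independent estimates of the same image and multiply their residuals, so the noise terms decorrelate instead of squaring. A minimal PyTorch sketch under that reading follows; function and variable names are illustrative, not from the paper's code.

```python
import torch

def dual_buffer_l2(render_a, render_b, target):
    """Dual-buffer L2 loss: render_a and render_b are two independent
    Monte Carlo estimates of the same image. Because the buffers are
    uncorrelated, E[(A - t) * (B - t)] = (E[A] - t) * (E[B] - t), so the
    loss and its gradient avoid the variance-induced bias that plagues
    mean((A - t)**2) at low sample counts."""
    return ((render_a - target) * (render_b - target)).mean()

# Toy check: two noisy estimates of the same underlying image.
torch.manual_seed(0)
truth = torch.rand(64, 64, 3)
target = truth.clone()
a = truth + 0.3 * torch.randn_like(truth)   # independent noise realizations
b = truth + 0.3 * torch.randn_like(truth)
naive = ((a - target) ** 2).mean()          # inflated by the noise variance
dual = dual_buffer_l2(a, b, target)         # near zero, as it should be
print(float(naive), float(dual))
```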
LeviPrint: Contactless Fabrication using Full Acoustic Trapping of Elongated Parts
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530752
I. Ezcurdia, Rafael Morales, M. Andrade, A. Marzo
Abstract: LeviPrint is a system for assembling objects in a contactless manner using acoustic levitation. We explore a set of optimal acoustic fields that enables full trapping of elongated objects, such as sticks, in both position and orientation. We then evaluate the capabilities of different ultrasonic levitators to dynamically manipulate these elongated objects. The combination of novel optimization algorithms and levitators enables the manipulation of sticks, beads, and droplets to fabricate complex objects. A system prototype composed of a robot arm and a levitator is tested on different fabrication processes. We highlight the reduction of cross-contamination and the capability of building on top of objects from different angles, as well as inside closed spaces. We hope that this technique inspires novel fabrication methods and reaches fields such as the microfabrication of electromechanical components or even in-vivo additive manufacturing.
Citations: 5
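For intuition about the acoustic fields being optimized, a textbook phased-array model sums complex monopole contributions from each transducer, steering the field by choosing per-transducer phases. The paper's levitator models and trap optimization are more sophisticated, so treat this sketch as background; all parameters are illustrative.

```python
import numpy as np

def pressure_field(points, transducers, phases, freq=40e3, c=343.0, p0=1.0):
    """Complex acoustic pressure from a phased array, using a simple
    point-source (monopole) model with 1/d spherical spreading."""
    k = 2 * np.pi * freq / c                      # wavenumber
    d = np.linalg.norm(points[:, None, :] - transducers[None, :, :], axis=-1)
    return (p0 / d * np.exp(1j * (k * d + phases[None, :]))).sum(axis=1)

# 16x16 array in the z = 0 plane, evaluated 10 cm above the array centre.
xs = np.linspace(-0.08, 0.08, 16)
tx = np.stack(np.meshgrid(xs, xs, indexing="ij"), -1).reshape(-1, 2)
transducers = np.concatenate([tx, np.zeros((len(tx), 1))], axis=1)
phases = np.zeros(len(transducers))               # all in phase
p = pressure_field(np.array([[0.0, 0.0, 0.1]]), transducers, phases)
print(abs(p))
```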
Rendering Iridescent Rock Dove Neck Feathers
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530749
Weizhen Huang, S. Merzbach, C. Callenberg, D. Stavenga, M. Hullin
Abstract: Bird feathers exhibit fascinating reflectance governed by fiber-like structures. Unlike hair and fur, feather geometry follows intricate hierarchical patterns that span many orders of magnitude in scale. At the smallest scales, fiber elements have strongly non-cylindrical cross-sections and are often complemented by regular nanostructures, causing rich structural color. Past attempts to render feathers using fiber- or texture-based appearance models have therefore missed characteristic aspects of the visual appearance. We introduce a new feather modeling and rendering framework that abstracts the microscopic geometry and reflectance into a microfacet-like BSDF. The R, TRT, and T lobes, also known from hair and fur, here account for specular reflection off the cortex, diffuse reflection off the medulla, and transmission due to barbule spacing, respectively. Our BSDF, which requires no precomputation or storage, can be efficiently importance-sampled and readily integrated into rendering pipelines that represent feather geometry down to the barb level. We verify our approach using a BSDF-capturing setup for small biological structures, as well as against calibrated photographs of rock dove neck feathers.
Citations: 1
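A BSDF built from weighted R, TRT, and T lobes slots into a renderer like any lobe mixture: evaluate as a weighted sum, importance-sample by first picking a lobe proportionally to its weight. The skeleton below shows only that plumbing; the placeholder lambda lobes stand in for the paper's feather-specific lobe shapes.

```python
import random

class LobeMixtureBSDF:
    """Skeleton of a multi-lobe BSDF in the spirit of the paper's
    R / TRT / T decomposition. Each lobe is (weight, eval_fn, sample_fn)."""

    def __init__(self, lobes):
        total = sum(w for w, _, _ in lobes)
        self.lobes = [(w / total, e, s) for w, e, s in lobes]

    def eval(self, wi, wo):
        # Full BSDF value: weighted sum over all lobes.
        return sum(w * e(wi, wo) for w, e, _ in self.lobes)

    def sample(self, wi):
        # Pick a lobe proportionally to its weight, then sample it.
        u, acc = random.random(), 0.0
        for w, _, s in self.lobes:
            acc += w
            if u <= acc:
                return s(wi)
        return self.lobes[-1][2](wi)

# Placeholder lobes standing in for cortex reflection (R), medulla
# scattering (TRT), and barbule-gap transmission (T).
bsdf = LobeMixtureBSDF([
    (0.5, lambda wi, wo: 1.0, lambda wi: "R-sample"),
    (0.3, lambda wi, wo: 0.5, lambda wi: "TRT-sample"),
    (0.2, lambda wi, wo: 0.2, lambda wi: "T-sample"),
])
print(bsdf.eval(None, None), bsdf.sample(None))
```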
Deep Deformable 3D Caricatures with Learned Shape Control
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530748
Yucheol Jung, W. Jang, S. Kim, Jiaolong Yang, Xin Tong, Seungyong Lee
Abstract: A 3D caricature is an exaggerated 3D depiction of a human face. The goal of this paper is to model the variations of 3D caricatures in a compact parameter space so that we can provide a useful data-driven toolkit for handling 3D caricature deformations. To achieve this goal, we propose an MLP-based framework for building a deformable surface model that takes a latent code and produces a 3D surface. In the framework, a SIREN MLP models a function that takes a 3D position on a fixed template surface and returns a 3D displacement vector for that position. We create variations of 3D surfaces by learning a hypernetwork that takes a latent code and produces the parameters of the MLP. Once learned, our deformable model provides a convenient editing space for 3D caricatures, supporting label-based semantic editing and point-handle-based deformation, both of which produce highly exaggerated yet natural 3D caricature shapes. We also demonstrate other applications of our deformable model, such as automatic 3D caricature creation. Our code and supplementary materials are available at https://github.com/ycjungSubhuman/DeepDeformable3DCaricatures.
Citations: 6
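The architecture is concise enough to sketch: a hypernetwork maps the latent code to the weights of a small SIREN MLP, which maps template-surface points to displacements. Layer widths, the latent size, and the frequency factor 30 below are illustrative defaults, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SirenDisplacement(nn.Module):
    """A SIREN-style MLP mapping a template-surface point to a displacement,
    with all of its weights produced by a hypernetwork from a latent code."""

    def __init__(self, latent_dim=128, hidden=64):
        super().__init__()
        self.hidden = hidden
        # Flat parameter layout: (3->h), (h->h), (h->3), each with a bias.
        self.n_params = (3 * hidden + hidden) + (hidden * hidden + hidden) + (hidden * 3 + 3)
        self.hyper = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, self.n_params)
        )

    def forward(self, z, x):
        """z: (latent_dim,) identity code; x: (N, 3) template points."""
        h = self.hidden
        p = self.hyper(z)
        w1, p = p[: 3 * h].view(h, 3), p[3 * h:]
        b1, p = p[:h], p[h:]
        w2, p = p[: h * h].view(h, h), p[h * h:]
        b2, p = p[:h], p[h:]
        w3, b3 = p[: h * 3].view(3, h), p[h * 3:]
        a = torch.sin(30.0 * (x @ w1.T + b1))    # SIREN sine activations
        a = torch.sin(30.0 * (a @ w2.T + b2))
        return a @ w3.T + b3                      # per-point 3D displacement

model = SirenDisplacement()
z = torch.randn(128)
pts = torch.rand(1000, 3)
deformed = pts + model(z, pts)
print(deformed.shape)  # torch.Size([1000, 3])
```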
Go Green: General Regularized Green's Functions for Elasticity
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530726
Jiong Chen, M. Desbrun
Abstract: The fundamental solutions (Green's functions) of linear elasticity for an infinite, isotropic medium are ubiquitous in interactive graphics applications that cannot afford the computational costs of volumetric meshing and finite-element simulation. For instance, the recent work of de Goes and James [2017] leveraged these Green's functions to formulate sculpting tools that capture broad, physically plausible deformations in real time, more intuitively and realistically than traditional editing brushes. In this paper, we extend this family of Green's functions by exploiting the anisotropic behavior of general linear elastic materials, where the relationship between stress and strain depends on the material's orientation. While this more general framework precludes analytical expressions for its fundamental solutions, we show that a finite sum of spherical harmonics can be used to decompose a Green's function, which can be further factorized into directional, radial, and material-dependent terms. From this decoupling, we show how to numerically derive sculpting brushes that generate anisotropic deformations, with finely controlled falloff profiles, in real time.
Citations: 2
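For context, the isotropic baseline being generalized here, the regularized Kelvinlet of de Goes and James [2017], does have a closed form. The sketch below reconstructs it from that cited work (so it should be checked against the original); mu, nu, and eps values are arbitrary.

```python
import numpy as np

def kelvinlet(r, f, eps=0.15, mu=5.0, nu=0.45):
    """Displacement at offset(s) r from a regularized point load f, per the
    isotropic regularized Kelvinlet of de Goes and James [2017]. mu is the
    shear modulus, nu Poisson's ratio, eps the regularization radius."""
    a = 1.0 / (4.0 * np.pi * mu)
    b = a / (4.0 * (1.0 - nu))
    r = np.atleast_2d(r)
    r_eps = np.sqrt((r ** 2).sum(-1) + eps ** 2)        # regularized distance
    term_f = (a - b) / r_eps + a * eps ** 2 / (2.0 * r_eps ** 3)
    term_r = b * (r @ f) / r_eps ** 3
    return term_f[:, None] * f + term_r[:, None] * r

# Brush falloff along a line of sample points away from the load centre.
pts = np.stack([np.linspace(0.0, 1.0, 5), np.zeros(5), np.zeros(5)], axis=-1)
print(kelvinlet(pts, f=np.array([0.0, 1.0, 0.0])))
```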
MoRF: Morphable Radiance Fields for Multiview Neural Head Modeling
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530753
Daoye Wang, P. Chandran, G. Zoss, D. Bradley, P. Gotardo
Abstract: Recent research has developed powerful generative models (e.g., StyleGAN2) that synthesize complete human head images with impressive photorealism, enabling applications such as photorealistic editing of real photographs. While these models can be trained on large collections of unposed images, their lack of explicit 3D knowledge makes it difficult to achieve even basic control over the 3D viewpoint without unintentionally altering identity. Recent Neural Radiance Field (NeRF) methods, on the other hand, already achieve multiview-consistent, photorealistic renderings, but so far they are limited to a single facial identity. In this paper, we propose a new Morphable Radiance Field (MoRF) method that extends a NeRF into a generative neural model that can realistically synthesize multiview-consistent images of complete human heads with variable and controllable identity. MoRF allows morphing between particular identities, synthesizing arbitrary new identities, and quickly generating a NeRF from a few images of a new subject, all while providing realistic and consistent rendering under novel viewpoints. We train MoRF in a supervised fashion by leveraging a high-quality database of multiview portrait images of several people, captured in studio with polarization-based separation of diffuse and specular reflection. We demonstrate that MoRF is a strong step forward towards generative NeRFs for 3D neural head modeling.
Citations: 36
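The core extension over a vanilla NeRF is conditioning the field on an identity latent. A minimal sketch of an identity-conditioned radiance MLP follows, using frequency encoding and conditioning by concatenation; the architecture and sizes are illustrative choices, not MoRF's.

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Standard NeRF frequency encoding of 3D positions."""
    out = [x]
    for i in range(n_freqs):
        out += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
    return torch.cat(out, dim=-1)

class IdentityConditionedNeRF(nn.Module):
    """Radiance field conditioned on an identity latent: the same network
    renders different people as the latent varies."""

    def __init__(self, latent_dim=64, n_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * (1 + 2 * n_freqs) + latent_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),        # (density, rgb)
        )

    def forward(self, x, z):
        feat = torch.cat([positional_encoding(x), z.expand(x.shape[0], -1)], dim=-1)
        out = self.net(feat)
        sigma = torch.relu(out[:, :1])   # non-negative density
        rgb = torch.sigmoid(out[:, 1:])  # colors in [0, 1]
        return sigma, rgb

model = IdentityConditionedNeRF()
sigma, rgb = model(torch.rand(512, 3), torch.randn(1, 64))
print(sigma.shape, rgb.shape)
```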
QuickPose: Real-time Multi-view Multi-person Pose Estimation in Crowded Scenes
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530746
Zhize Zhou, Qing Shuai, Yize Wang, Qi Fang, Xiaopeng Ji, Fashuai Li, H. Bao, Xiaowei Zhou
Abstract: This work proposes a real-time algorithm for reconstructing 3D human poses in crowded scenes from multiple calibrated views. The key challenge of this problem is to efficiently match 2D observations across multiple views. Previous methods perform multi-view matching either at the full-body level, which is sensitive to 2D pose estimation errors, or at the part level, which ignores 2D constraints between different types of body parts in the same view. Instead, our approach reasons about all plausible skeleton proposals during multi-view matching, where each skeleton may consist of an arbitrary number of parts rather than being a whole body or a single part. To this end, we formulate multi-view matching as mode seeking in the space of skeleton proposals and develop an efficient algorithm named QuickPose to solve it, enabling real-time motion capture in crowded scenes. Experiments show that the proposed algorithm achieves state-of-the-art speed and accuracy on public datasets.
Citations: 1
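Any such multi-view matcher needs a geometric consistency score between 2D detections in different views. A standard choice, shown below, is the symmetric epipolar distance under the fundamental matrix between the two cameras; the paper's affinity and mode-seeking machinery build on top of scores like this.

```python
import numpy as np

def epipolar_distance(x1, x2, F):
    """Symmetric point-to-epipolar-line distance for homogeneous 2D points
    x1 (view 1) and x2 (view 2) under fundamental matrix F. Small values
    mean the detections are geometrically consistent across views."""
    l2 = F @ x1            # epipolar line of x1 in view 2
    l1 = F.T @ x2          # epipolar line of x2 in view 1
    d2 = abs(x2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(x1 @ l1) / np.hypot(l1[0], l1[1])
    return 0.5 * (d1 + d2)

# Rectified stereo: corresponding points must share the same image row.
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
a = np.array([100.0, 240.0, 1.0])
b = np.array([150.0, 243.0, 1.0])
print(epipolar_distance(a, b, F))  # 3.0 pixels off the epipolar line
```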
Self-Conditioned GANs for Image Editing
Pub Date: 2022-07-27. DOI: 10.1145/3528233.3530698
Yunzhe Liu, Rinon Gal, Amit H. Bermano, Baoquan Chen, D. Cohen-Or
Abstract: Generative Adversarial Networks (GANs) are susceptible to bias, learned either from unbalanced data or through mode collapse. The networks focus on the core of the data distribution, leaving the tails (the edges of the distribution) behind. We argue that this bias is responsible not only for fairness concerns, but also plays a key role in the collapse of latent-traversal editing methods when deviating from the distribution's core. Building on this observation, we outline a method for mitigating generative bias through a self-conditioning process, where distances in the latent space of a pre-trained generator are used to provide initial labels for the data. By fine-tuning the generator on a re-sampled distribution drawn from these self-labeled data, we force the generator to better contend with rare semantic attributes and enable more realistic generation of these properties. We compare our models to a wide range of latent editing methods and show that, by alleviating the bias, they achieve finer semantic control and better identity preservation across a wider range of transformations. Our code and models will be available at https://github.com/yzliu567/sc-gan.
Citations: 0
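The self-conditioning recipe, cluster latents of a pre-trained generator into pseudo-labels and fine-tune on a cluster-balanced resampling, can be sketched directly. The cluster count, k-means, and the inverse-frequency sampler below are illustrative stand-ins for the paper's exact choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def self_label_and_resample(latents, n_clusters=50, n_draws=10000, seed=0):
    """Pseudo-label latent codes by clustering, then draw a cluster-balanced
    sample so that rare regions of the latent space (the distribution's
    tails) are seen as often as the core during fine-tuning."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(latents)
    counts = np.bincount(labels, minlength=n_clusters)
    weights = 1.0 / counts[labels]          # inverse cluster frequency
    weights /= weights.sum()
    rng = np.random.default_rng(seed)
    return rng.choice(len(latents), size=n_draws, replace=True, p=weights), labels

# Toy latent set with a dense core and a sparse, far-away tail.
rng = np.random.default_rng(1)
core = rng.normal(0.0, 1.0, size=(9800, 16))
tail = rng.normal(5.0, 1.0, size=(200, 16))
idx, labels = self_label_and_resample(np.vstack([core, tail]), n_clusters=10)
print((idx >= 9800).mean())  # tail's share of draws grows well above its 2%
```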