ACM Transactions on Graphics (TOG) — Latest Articles

Capturing Animation-Ready Isotropic Materials Using Systematic Poking
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618406
Huanyu Chen, Danyong Zhao, J. Barbič
Abstract: Capturing material properties of real-world elastic solids is both challenging and highly relevant to many applications in computer graphics, robotics and related fields. We give a non-intrusive, in-situ and inexpensive approach to measure the nonlinear elastic energy density function of man-made materials and biological tissues. We poke the elastic object with 3D-printed rigid cylinders of known radii, and use a precision force meter to record the contact force as a function of the indentation depth, which we measure using a force meter stand, or a novel unconstrained laser setup. We model the 3D elastic solid using the Finite Element Method (FEM), and elastic energy using a compressible Valanis-Landel material that generalizes Neo-Hookean materials by permitting arbitrary tensile behavior under large deformations. We then use optimization to fit the nonlinear isotropic elastic energy so that the FEM contact forces and indentations match their measured real-world counterparts. Because we use carefully designed cubic splines, our materials are accurate in a large range of stretches and robust to inversions, and are therefore "animation-ready" for computer graphics applications. We demonstrate how to exploit radial symmetry to convert the 3D elastostatic contact problem to the mathematically equivalent 2D problem, which vastly accelerates optimization. We also greatly improve the theory and robustness of stretch-based elastic materials, by giving a simple and elegant formula to compute the tangent stiffness matrix, with rigorous proofs and singularity handling. We also contribute the observation that volume compressibility can be estimated by poking with rigid cylinders of different radii, which avoids optical cameras and greatly simplifies experiments. We validate our method by performing full 3D simulations using the optimized materials and confirming that they match real-world forces, indentations and real deformed 3D shapes. We also validate it using a "Shore 00" durometer, a standard device for measuring material hardness.
Citations: 0
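Illustrative sketch: the core of the pipeline above is fitting material parameters so that simulated contact forces match measured force-indentation curves. The snippet below only shows that general fitting idea on synthetic data, using a hypothetical power-law force model `f(d) = k * d**p` instead of the paper's FEM/Valanis-Landel optimization.

```python
# Minimal sketch: fit a hypothetical power-law force-indentation model
# f(d) = k * d**p to synthetic poking measurements via least squares.
# This only illustrates the fitting idea; the paper instead optimizes a
# full FEM model with a Valanis-Landel energy against measured forces.
import numpy as np
from scipy.optimize import curve_fit

def force_model(depth_mm, k, p):
    """Hypothetical contact-force model: force grows as a power of depth."""
    return k * depth_mm**p

# Synthetic "measurements": indentation depths (mm) and forces (N) with noise.
rng = np.random.default_rng(0)
depths = np.linspace(0.5, 5.0, 20)
true_k, true_p = 0.8, 1.5
forces = force_model(depths, true_k, true_p) + rng.normal(0, 0.05, depths.size)

# Least-squares fit of the model parameters to the measurements.
(k_fit, p_fit), _ = curve_fit(force_model, depths, forces, p0=(1.0, 1.0))
print(f"fitted k = {k_fit:.3f}, p = {p_fit:.3f}")
```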
Fluid Simulation on Neural Flow Maps
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618392
Yitong Deng, Hong-Xing Yu, Diyang Zhang, Jiajun Wu, Bo Zhu
Abstract: We introduce Neural Flow Maps, a novel simulation method bridging the emerging paradigm of implicit neural representations with fluid simulation based on the theory of flow maps, to achieve state-of-the-art simulation of inviscid fluid phenomena. We devise a novel hybrid neural field representation, Spatially Sparse Neural Fields (SSNF), which fuses small neural networks with a pyramid of overlapping, multi-resolution, and spatially sparse grids, to compactly represent long-term spatiotemporal velocity fields at high accuracy. With this neural velocity buffer in hand, we compute long-term, bidirectional flow maps and their Jacobians in a mechanistically symmetric manner, to facilitate drastic accuracy improvement over existing solutions. These long-range, bidirectional flow maps enable high advection accuracy with low dissipation, which in turn facilitates high-fidelity incompressible flow simulations that manifest intricate vortical structures. We demonstrate the efficacy of our neural fluid simulation in a variety of challenging simulation scenarios, including leapfrogging vortices, colliding vortices, vortex reconnections, as well as vortex generation from moving obstacles and density differences. Our examples show increased performance over existing methods in terms of energy conservation, visual complexity, adherence to experimental observations, and preservation of detailed vortical structures.
Citations: 0
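Illustrative sketch: a flow map transports seed points along the velocity field over a time interval. The snippet below builds such a map by RK4 particle integration through an analytic 2D vortex field; the velocity field and resolution are arbitrary illustrative choices, and nothing of the paper's neural buffer (SSNF), backward maps, or Jacobian tracking is reproduced.

```python
# Minimal sketch: build a discrete forward flow map phi(x, t) by RK4
# integration of particles through an analytic 2D velocity field.
# The paper instead reads velocities from a neural buffer (SSNF) and also
# tracks backward maps and Jacobians; none of that is reproduced here.
import numpy as np

def velocity(p):
    """Steady Taylor-Green-like vortex field on [0,1]^2 (illustrative only)."""
    x, y = p[..., 0], p[..., 1]
    u = np.sin(np.pi * x) * np.cos(np.pi * y)
    v = -np.cos(np.pi * x) * np.sin(np.pi * y)
    return np.stack([u, v], axis=-1)

def integrate_flow_map(points, t_end, dt=1e-2):
    """Advance seed points along the flow with classic RK4 steps."""
    p = points.copy()
    steps = int(round(t_end / dt))
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

# Flow map evaluated on a small grid of seed points.
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
seeds = np.stack([xs, ys], axis=-1).reshape(-1, 2)
mapped = integrate_flow_map(seeds, t_end=0.5)
print(mapped[:3])
```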
An Unified λ-subdivision Scheme for Quadrilateral Meshes with Optimal Curvature Performance in Extraordinary Regions
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618400
Weiyin Ma, Xu Wang, Yue Ma
Abstract: We propose a unified λ-subdivision scheme with a continuous family of tuned subdivisions for quadrilateral meshes. Main subdivision stencil parameters of the unified scheme are represented as spline functions of the subdominant eigenvalue λ of the respective subdivision matrices, and the λ value can be selected within a wide range to produce desired properties of refined meshes and limit surfaces with optimal curvature performance in extraordinary regions. Spline representations of stencil parameters are constructed from discrete optimized stencil coefficients obtained by a general tuning framework that optimizes eigenvectors of subdivision matrices towards curvature continuity conditions. To further improve the quality of limit surfaces, a weighting function is devised to penalize sign changes of Gauss curvatures on the respective second-order characteristic maps. By selecting an appropriate λ, the resulting unified subdivision scheme produces the anticipated properties for different target applications, including the nice properties of several other existing tuned subdivision schemes. Comparison results also validate the advantage of the proposed scheme, which yields higher-quality surfaces for subdivision at lower λ values, a challenging task for other related tuned subdivision schemes.
Citations: 0
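Illustrative sketch: the tuning parameter λ above is the subdominant eigenvalue of the local subdivision matrix. The snippet below computes that eigenvalue for the classic cubic B-spline curve scheme, a textbook example chosen only to show what "subdominant eigenvalue" means; the paper's quadrilateral-mesh stencils around extraordinary vertices are not reproduced.

```python
# Minimal sketch: compute the subdominant eigenvalue lambda of a local
# subdivision matrix. Shown for the classic cubic B-spline *curve* scheme
# (a textbook example), not for the paper's quadrilateral-mesh stencils.
import numpy as np

# Local subdivision matrix mapping 3 old control points around a vertex
# to 3 new ones for cubic B-spline curve subdivision.
S = np.array([
    [1/2, 1/2, 0  ],
    [1/8, 3/4, 1/8],
    [0,   1/2, 1/2],
])

eigvals = np.sort(np.abs(np.linalg.eigvals(S)))[::-1]  # sort by magnitude
print("eigenvalues:", eigvals)                         # expected: 1.0, 0.5, 0.25
print("subdominant eigenvalue lambda =", eigvals[1])
```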
Scene-Aware Activity Program Generation with Language Guidance
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618338
Zejia Su, Qingnan Fan, Xuelin Chen, Oliver van Kaick, Hui Huang, Ruizhen Hu
Abstract: We address the problem of scene-aware activity program generation, which requires decomposing a given activity task into instructions that can be sequentially performed within a target scene to complete the activity. While existing methods have shown the ability to generate rational or executable programs, generating programs with both high rationality and executability still remains a challenge. Hence, we propose a novel method whose key idea is to explicitly combine the language rationality of a powerful language model with dynamic perception of the target scene where instructions are executed, to generate programs with high rationality and executability. Our method iteratively generates instructions for the activity program. Specifically, a two-branch feature encoder operates on a language-based and a graph-based representation of the current generation progress to extract language features and scene graph features, respectively. These features are then used by a predictor to generate the next instruction in the program. Subsequently, another module performs the predicted action and updates the scene for perception in the next iteration. Extensive evaluations are conducted on the VirtualHome-Env dataset, showing the advantages of our method over previous work. Key algorithmic designs are validated through ablation studies, and results on other types of inputs are also presented to show the generalizability of our method.
Citations: 0
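Illustrative sketch: the abstract describes an iterative generate-execute-update loop. The skeleton below shows only that control flow; the predictor and scene model are hypothetical stand-ins and bear no relation to the paper's two-branch encoder or the VirtualHome-Env environment.

```python
# Minimal sketch of the iterative generate-execute-update loop described in
# the abstract. The predictor and scene below are hypothetical stand-ins,
# not the paper's two-branch encoder or the VirtualHome-Env environment.

def predict_next_instruction(task, history, scene):
    """Hypothetical predictor: picks the next instruction from the scene state."""
    for obj, state in scene.items():
        if state == "closed":
            return ("open", obj)
    return ("done",)

def execute(instruction, scene):
    """Hypothetical executor: applies the instruction to the scene state."""
    if instruction[0] == "open":
        scene[instruction[1]] = "open"

def generate_program(task, scene, max_steps=10):
    program = []
    for _ in range(max_steps):
        instr = predict_next_instruction(task, program, scene)
        if instr == ("done",):
            break
        execute(instr, scene)   # update the scene so the next step perceives it
        program.append(instr)
    return program

scene = {"fridge": "closed", "cabinet": "closed"}
print(generate_program("put away groceries", scene))
```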
Commonsense Knowledge-Driven Joint Reasoning Approach for Object Retrieval in Virtual Reality
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618320
Haiyan Jiang, Dongdong Weng, Xiaonuo Dongye, Le Luo, Zhenliang Zhang
Abstract: Retrieving out-of-reach objects is a crucial task in virtual reality (VR). One of the most commonly used approaches for this task is the gesture-based approach, which allows for bare-hand, eyes-free, and direct retrieval. However, previous work has primarily focused on assigned gesture design, neglecting the context. This can make it challenging to accurately retrieve an object from a large number of objects due to the one-to-one mapping metaphor, limitations of finger poses, and memory burdens. There is a general consensus that objects and contexts are related, which suggests that the object expected to be retrieved is related to the context, including the scene and the objects with which users interact. As such, we propose a commonsense knowledge-driven joint reasoning approach for object retrieval, where human grasping gestures and context are modeled using an And-Or graph (AOG). This approach enables users to accurately retrieve objects from a large number of candidate objects by using natural grasping gestures based on their experience of grasping physical objects. Experimental results demonstrate that our proposed approach improves retrieval accuracy. We also propose an object retrieval system based on the proposed approach. Two user studies show that our system enables efficient object retrieval in virtual environments (VEs).
Citations: 0
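Illustrative sketch: joint reasoning over gesture and context can be caricatured as ranking candidate objects by a product of a gesture-likelihood term and a scene-context prior. The toy snippet below uses made-up numbers and a naive product score; it is only a stand-in for the paper's And-Or-graph model.

```python
# Minimal sketch: rank candidate objects by a joint score that combines how
# well each object matches the grasping gesture with how likely it is in the
# current scene context. A toy stand-in for the paper's And-Or-graph
# reasoning; all probabilities below are made-up illustrative numbers.

gesture_likelihood = {   # P(observed grasp | object)
    "mug": 0.7, "pen": 0.1, "basketball": 0.05,
}
context_prior = {        # P(object | scene = "office desk")
    "mug": 0.5, "pen": 0.45, "basketball": 0.05,
}

def joint_score(obj):
    return gesture_likelihood.get(obj, 0.0) * context_prior.get(obj, 0.0)

candidates = ["mug", "pen", "basketball"]
ranked = sorted(candidates, key=joint_score, reverse=True)
print(ranked)  # expected: ['mug', 'pen', 'basketball']
```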
Efficient Cone Singularity Construction for Conformal Parameterizations
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618407
Mo Li, Qing Fang, Zheng Zhang, Ligang Liu, Xiao-Ming Fu
Abstract: We propose an efficient method to construct sparse cone singularities under distortion-bounded constraints for conformal parameterizations. Central to our algorithm is the use of shape derivatives to move cones for distortion reduction without changing the number of cones. In particular, a supernodal sparse Cholesky update significantly accelerates this movement process. To satisfy the distortion-bounded constraint, we alternately move cones and add cones. The capability and feasibility of our approach are demonstrated on a data set containing 3885 models. Compared with the state-of-the-art method, we achieve an average acceleration of 15 times and slightly fewer cones for the same amount of distortion.
Citations: 0
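Illustrative sketch: behind cone placement sits a Poisson-type solve L u = dK for per-vertex log scale factors u, given a curvature change dK concentrated at the cones. The snippet below only illustrates that linear solve: a plain grid-graph Laplacian stands in for a cotangent mesh Laplacian, and scipy's sparse LU stands in for the supernodal Cholesky updates the paper uses; cone angles, the mesh, and the pinned vertex are arbitrary illustrative choices.

```python
# Minimal sketch of the linear solve underlying cone-based conformal
# parameterization: L u = dK for per-vertex log scale factors u.
# A grid-graph Laplacian stands in for a cotangent Laplacian, and scipy's
# sparse LU stands in for the paper's supernodal Cholesky updates.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 16                                 # vertices of an n x n grid "mesh"
N = n * n
idx = lambda i, j: i * n + j

rows, cols, vals = [], [], []
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (0, 1)):
            if i + di < n and j + dj < n:
                a, b = idx(i, j), idx(i + di, j + dj)
                rows += [a, b, a, b]
                cols += [a, b, b, a]
                vals += [1.0, 1.0, -1.0, -1.0]
L = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))

# Curvature change: +pi/2 at one "cone" vertex, balanced uniformly elsewhere
# so the right-hand side is compatible with the Laplacian's null space.
dK = np.full(N, -(np.pi / 2) / (N - 1))
dK[idx(n // 2, n // 2)] = np.pi / 2

# Pin vertex 0 to remove the constant null space, then solve.
keep = np.arange(1, N)
u = np.zeros(N)
u[keep] = spla.spsolve(L[keep][:, keep].tocsc(), dK[keep])
print("log scale factor range:", u.min(), u.max())
```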
Amortizing Samples in Physics-Based Inverse Rendering Using ReSTIR
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618331
Yu-Chen Wang, Chris Wyman, Lifan Wu, Shuang Zhao
Abstract: Recently, great progress has been made in physics-based differentiable rendering. Existing differentiable rendering techniques typically focus on static scenes, but during inverse rendering---a key application of differentiable rendering---the scene is updated dynamically by each gradient step. In this paper, we take a first step toward leveraging temporal data in the context of inverse direct illumination. By adopting reservoir-based spatiotemporal resampled importance resampling (ReSTIR), we introduce new Monte Carlo estimators for both the interior and boundary components of differential direct-illumination integrals. We also integrate ReSTIR with antithetic sampling to further improve its effectiveness. At equal frame time, our methods produce gradient estimates with up to 100× lower relative error than baseline methods. Additionally, we propose an inverse-rendering pipeline that incorporates these estimators and provides reconstructions with up to 20× lower error.
Citations: 0
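Illustrative sketch: the streaming primitive underneath ReSTIR is a weighted reservoir that consumes candidate samples one at a time and keeps one of them in proportion to its resampled-importance-sampling weight. The snippet below implements that single-sample reservoir on toy 1D densities; the target and source functions are arbitrary, and none of the paper's spatiotemporal reuse, boundary terms, or antithetic estimators appear.

```python
# Minimal sketch of the weighted reservoir used in resampled importance
# sampling (RIS), the building block that ReSTIR streams over space and time.
# The target/source densities here are toy 1D functions, not path-space
# integrands, and none of the paper's differentiable estimators appear.
import math
import random

class Reservoir:
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0     # running sum of RIS weights
        self.count = 0       # number of candidates seen

    def update(self, candidate, weight, rng):
        self.w_sum += weight
        self.count += 1
        if self.w_sum > 0 and rng.random() < weight / self.w_sum:
            self.sample = candidate

def target_pdf(x):      # unnormalized target we would like to sample
    return math.exp(-x) * (1 + math.sin(3 * x)) ** 2

def source_pdf(x):      # easy-to-sample source: uniform on [0, 4]
    return 0.25

rng = random.Random(0)
res = Reservoir()
for _ in range(64):                       # stream 64 candidates
    x = rng.uniform(0.0, 4.0)
    w = target_pdf(x) / source_pdf(x)     # RIS weight: target / source
    res.update(x, w, rng)

print("selected sample:", res.sample)
```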
Differentiable Rendering of Parametric Geometry
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618387
Markus Worchel, Marc Alexa
Abstract: We propose an efficient method for differentiable rendering of parametric surfaces and curves, which enables their use in inverse graphics problems. Our central observation is that a representative triangle mesh can be extracted from a continuous parametric object in a differentiable and efficient way. We derive differentiable meshing operators for surfaces and curves that provide varying levels of approximation granularity. With triangle mesh approximations, we can readily leverage existing machinery for differentiable mesh rendering to handle parametric geometry. Naively combining differentiable tessellation with inverse graphics settings lacks robustness and is prone to reaching undesirable local minima. To this end, we draw a connection between our setting and the optimization of triangle meshes in inverse graphics, and present a set of optimization techniques, including regularizations and coarse-to-fine schemes. We show the viability and efficiency of our method in a set of image-based computer-aided design applications.
Citations: 0
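Illustrative sketch: the basic (non-differentiable) version of extracting a triangle mesh from a parametric surface is to sample a regular grid in the (u, v) domain and emit two triangles per cell. The snippet below does exactly that for an arbitrary illustrative patch; the paper's contribution, making this extraction differentiable with adaptive granularity, is not reproduced.

```python
# Minimal sketch: turn a parametric surface into a triangle mesh by sampling
# a regular (u, v) grid and emitting two triangles per cell. This is the
# plain, non-differentiable version of the meshing idea; the paper makes the
# extraction differentiable and adds adaptive granularity.
import numpy as np

def surface(u, v):
    """Illustrative parametric patch (a curved height field over [0,1]^2)."""
    z = 0.25 * np.sin(2 * np.pi * u) * np.cos(2 * np.pi * v)
    return np.stack([u, v, z], axis=-1)

def tessellate(res=16):
    u, v = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res),
                       indexing="ij")
    vertices = surface(u, v).reshape(-1, 3)

    faces = []
    for i in range(res - 1):
        for j in range(res - 1):
            a = i * res + j
            b = a + 1
            c = a + res
            d = c + 1
            faces += [(a, b, d), (a, d, c)]   # two triangles per grid cell
    return vertices, np.array(faces)

verts, faces = tessellate()
print(verts.shape, faces.shape)   # (256, 3) (450, 3)
```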
OpenSVBRDF: A Database of Measured Spatially-Varying Reflectance
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618358
Xiaohe Ma, Xianmin Xu, Leyao Zhang, Kun Zhou, Hongzhi Wu
Abstract: We present the first large-scale database of measured spatially-varying anisotropic reflectance, consisting of 1,000 high-quality near-planar SVBRDFs spanning 9 material categories such as wood, fabric and metal. Each sample is captured in 15 minutes and represented as a set of high-resolution texture maps that correspond to spatially-varying BRDF parameters and local frames. To build this database, we develop a novel integrated system for robust, high-quality and highly efficient reflectance acquisition and reconstruction. Our setup consists of 2 cameras and 16,384 LEDs. We train 64 lighting patterns for efficient acquisition, in conjunction with a network that predicts per-point reflectance in a neural representation from carefully aligned two-view measurements captured under the patterns. The intermediate results are further fine-tuned with respect to the photographs acquired under 63 effective linear lights, and finally fitted to a BRDF model. We report various statistics of the database, and demonstrate its value in the applications of material generation, classification, and sampling. All related data, including future additions to the database, can be downloaded from https://opensvbrdf.github.io/.
Citations: 0
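Illustrative sketch: an SVBRDF texture map stores per-texel reflectance parameters that a renderer evaluates at shading time. The snippet below evaluates a generic isotropic GGX microfacet BRDF from such parameters (diffuse albedo, specular F0, roughness) at a single point; this is a toy stand-in, not the database's anisotropic model with local frames or its neural intermediate representation.

```python
# Minimal sketch: evaluate a generic isotropic GGX microfacet BRDF from
# per-texel parameters (diffuse albedo, specular F0, roughness). A toy
# stand-in for the anisotropic, local-frame representation in the database.
import numpy as np

def ggx_brdf(n, l, v, diffuse, f0, roughness):
    """Diffuse + GGX specular BRDF value for unit vectors n, l, v."""
    alpha = roughness ** 2
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    n_l = max(np.dot(n, l), 1e-6)
    n_v = max(np.dot(n, v), 1e-6)
    n_h = max(np.dot(n, h), 1e-6)
    h_v = max(np.dot(h, v), 1e-6)

    # GGX normal distribution
    d = alpha**2 / (np.pi * (n_h**2 * (alpha**2 - 1) + 1) ** 2)

    # Smith masking-shadowing (separable form)
    def g1(n_x):
        tan2 = (1 - n_x**2) / n_x**2
        return 2.0 / (1.0 + np.sqrt(1.0 + alpha**2 * tan2))
    g = g1(n_l) * g1(n_v)

    # Schlick Fresnel
    f = f0 + (1 - f0) * (1 - h_v) ** 5

    specular = d * g * f / (4 * n_l * n_v)
    return diffuse / np.pi + specular

n = np.array([0.0, 0.0, 1.0])
l = np.array([0.3, 0.0, 1.0]); l /= np.linalg.norm(l)
v = np.array([-0.2, 0.1, 1.0]); v /= np.linalg.norm(v)
print(ggx_brdf(n, l, v, diffuse=np.array([0.6, 0.4, 0.3]), f0=0.04, roughness=0.3))
```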
Computational Design of Flexible Planar Microstructures
ACM Transactions on Graphics (TOG) Pub Date: 2023-12-04 DOI: 10.1145/3618396
Zhan Zhang, Christopher Brandt, Jean Jouve, Yue Wang, Tian Chen, Mark Pauly, Julian Panetta
Abstract: Mechanical metamaterials enable customizing the elastic properties of physical objects by altering their fine-scale structure. A broad gamut of effective material properties can be produced even from a single fabrication material by optimizing the geometry of a periodic microstructure tiling. Past work has extensively studied the capabilities of microstructures in the small-displacement regime, where periodic homogenization of linear elasticity yields computationally efficient optimal design algorithms. However, many applications involve flexible structures undergoing large deformations, for which the accuracy of linear elasticity rapidly deteriorates due to geometric nonlinearities. Design of microstructures at finite strains involves a massive increase in computation and is much less explored; no computational tool yet exists to design metamaterials emulating target hyperelastic laws over finite regions of strain space. We make an initial step in this direction, developing algorithms to accelerate homogenization and metamaterial design for nonlinear elasticity and building a complete framework for the optimal design of planar metamaterials. Our nonlinear homogenization method works by efficiently constructing an accurate interpolant of a microstructure's deformation over a finite space of macroscopic strains likely to be endured by the metamaterial. From this interpolant, the homogenized energy density, stress, and tangent elasticity tensor describing the microstructure's effective properties can be inexpensively computed at any strain. Our design tool then fits the effective material properties to a target constitutive law over a region of strain space using a parametric shape optimization approach, producing a directly manufacturable geometry. We systematically test our framework by designing a catalog of materials fitting isotropic Hooke's laws as closely as possible. We demonstrate significantly improved accuracy over traditional linear metamaterial design techniques by fabricating and testing physical prototypes.
Citations: 0
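Illustrative sketch: the simplest version of "fit effective properties to a target constitutive law" is recovering the two Lamé parameters of a planar isotropic Hooke's law, σ = λ tr(ε) I + 2μ ε, from sampled strain/stress pairs. The snippet below does that by linear least squares on synthetic data; the paper instead fits full nonlinear homogenized responses over a region of strain space via shape optimization, which is not modeled here.

```python
# Minimal sketch: recover the two Lame parameters of a planar isotropic
# Hooke's law, sigma = lam * tr(eps) * I + 2 * mu * eps, from sampled
# strain/stress pairs by linear least squares. Only the "fit a constitutive
# law to sampled responses" step is shown, on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
lam_true, mu_true = 3.0, 1.5

A_rows, b_rows = [], []
for _ in range(20):
    # random small symmetric 2x2 strain and its Hookean stress (plus noise)
    e = rng.normal(0, 0.02, (2, 2)); e = 0.5 * (e + e.T)
    s = lam_true * np.trace(e) * np.eye(2) + 2 * mu_true * e
    s += rng.normal(0, 1e-4, (2, 2)); s = 0.5 * (s + s.T)
    for (i, j) in ((0, 0), (1, 1), (0, 1)):
        # sigma_ij = lam * tr(e) * delta_ij + 2 * mu * e_ij
        A_rows.append([np.trace(e) * (i == j), 2 * e[i, j]])
        b_rows.append(s[i, j])

(lam_fit, mu_fit), *_ = np.linalg.lstsq(np.array(A_rows), np.array(b_rows),
                                        rcond=None)
print(f"lambda = {lam_fit:.3f}, mu = {mu_fit:.3f}")
```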