Latest Publications in Computer Graphics Forum

VRTree: Example-Based 3D Interactive Tree Modeling in Virtual Reality
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-11-07 · DOI: 10.1111/cgf.15254
Di Wu, Mingxin Yang, Zhihao Liu, Fangyuan Tu, Fang Liu, Zhanglin Cheng
Abstract: We present VRTree, an example-based interactive virtual reality (VR) system designed to efficiently create diverse 3D tree models while faithfully preserving the botanical characteristics of real-world references. Our method employs a novel representation called Hierarchical Branch Lobe (HBL), which captures the hierarchical features of trees and serves as a versatile intermediary for intuitive VR interaction. The HBL representation decomposes a 3D tree into a series of concise examples, each consisting of a small set of main branches, secondary branches, and lobe-bounded twigs. The core of our system involves two key components: (1) an automatic algorithm that extracts an initial library of HBL examples from real tree point clouds; these examples can optionally be refined according to user intentions through an interactive editing process; and (2) an interface through which users interact with the extracted HBL examples to assemble new tree structures, ensuring that local features align with the target tree species. A shape-guided procedural growth algorithm then transforms the assembled HBL structures into highly realistic, fine-grained 3D tree models. Extensive experiments and user studies demonstrate that VRTree outperforms current state-of-the-art approaches, offering a highly effective and easy-to-use VR tool for tree modeling.
Citations: 0
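To make the HBL representation concrete, here is a minimal Python sketch of how such a hierarchy might be organized as data. All names (`Lobe`, `HBLExample`, the polyline encoding of branches) are illustrative assumptions, not the authors' actual data structures.

```python
from dataclasses import dataclass, field

Point3 = tuple[float, float, float]

@dataclass
class Lobe:
    """Bounding volume enclosing fine twig geometry (a sphere, for simplicity)."""
    center: Point3
    radius: float

@dataclass
class HBLExample:
    """One Hierarchical Branch Lobe example: a small set of main branches,
    secondary branches, and lobe-bounded twigs (branches as 3D polylines)."""
    main_branches: list[list[Point3]]
    secondary_branches: list[list[Point3]]
    lobes: list[Lobe] = field(default_factory=list)

# A tree is assembled from several such examples before the shape-guided
# procedural growth step turns the coarse structure into fine geometry.
library: list[HBLExample] = []
```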
Inverse Garment and Pattern Modeling with a Differentiable Simulator
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-11-07 · DOI: 10.1111/cgf.15249
Boyang Yu, Frederic Cordier, Hyewon Seo
Abstract: The capability to generate simulation-ready garment models from 3D shapes of clothed people will significantly enhance the interpretability of captured geometry of real garments, as well as their faithful reproduction in the digital world. This will have a notable impact on fields like shape capture in social VR and virtual try-on in the fashion industry. To align with the garment modeling process standardized by the fashion industry and cloth simulation software, it is necessary to recover 2D patterns, which are then placed around the wearer's body model and seamed prior to the draping simulation. This involves an inverse garment design problem, which is the focus of our work: starting with an arbitrary target garment geometry, our system estimates its animatable replica along with its corresponding 2D pattern. Built upon a differentiable cloth simulator, it runs an optimization process directed towards minimizing the deviation of the simulated garment shape from the target geometry, while maintaining desirable properties such as left-to-right symmetry. Experimental results on various real-world and synthetic data show that our method outperforms state-of-the-art methods in producing both high-quality garment models and accurate 2D patterns.
Citations: 0
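The optimization loop described in the abstract can be sketched as follows, assuming a differentiable cloth simulator is available as a black-box function. The symmetry penalty, the mirroring convention, and all parameter values are illustrative guesses, not the paper's formulation.

```python
import torch

def symmetry_loss(pattern: torch.Tensor) -> torch.Tensor:
    """Penalize left-right asymmetry of the 2D pattern vertices (V, 2).
    Assumes the vertex list reverses under mirroring about the vertical
    axis -- an illustrative convention, not the paper's."""
    mirrored = (pattern * torch.tensor([-1.0, 1.0])).flip(0)
    return ((pattern - mirrored) ** 2).mean()

def fit_pattern(simulate, pattern, target, steps=200, lr=1e-2, w_sym=0.1):
    """Inverse design loop: `simulate` is a differentiable cloth simulator
    mapping 2D pattern vertices to draped 3D vertices (V, 3)."""
    pattern = pattern.clone().requires_grad_(True)
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        draped = simulate(pattern)                    # simulated garment shape
        loss = ((draped - target) ** 2).mean() + w_sym * symmetry_loss(pattern)
        loss.backward()                               # gradients flow through the simulator
        opt.step()
    return pattern.detach()
```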
Light Distribution Models for Tree Growth Simulation
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-11-05 · DOI: 10.1111/cgf.15268
Tristan Nauber, Patrick Mäder
Abstract: The simulation and modelling of tree growth is a complex subject with a long history and an important area of research in both computer graphics and botany. For more than 50 years, new approaches to this topic have been presented regularly, addressing various aspects of realism. To build on these achievements, we present a compact and robust functional-structural plant model (FSPM) that is consistent with botanical rules. While we introduce several extensions to typical approaches, we focus mainly on the distribution of light as a resource in three-dimensional space. We present four light distribution models, based on ray tracing, space colonization, voxel-based approaches and bounding volumes. By simulating individual light sources, we can create a more precisely specified scene setup for plant simulation than has been possible in the past. By taking this more accurate distribution of light in the environment into account, our technique is capable of producing realistic and diverse tree models.
Citations: 0
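For a flavor of what a voxel-based light distribution model might compute, the sketch below fills a grid with irradiance from a single point light; the falloff model, absorption term, and parameters are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def voxel_light_field(light_pos, grid_res=32, extent=10.0, absorption=0.1):
    """Irradiance per voxel from one point light: inverse-square falloff
    damped by an exponential absorption term (a crude stand-in for
    occlusion by foliage, which a full model would accumulate per ray)."""
    xs = np.linspace(-extent / 2, extent / 2, grid_res)
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    d2 = (X - light_pos[0])**2 + (Y - light_pos[1])**2 + (Z - light_pos[2])**2
    return np.exp(-absorption * np.sqrt(d2)) / np.maximum(d2, 1e-6)

# A growth model could then steer each bud toward the brightest nearby voxel:
light = voxel_light_field((0.0, 5.0, 0.0))
brightest = np.unravel_index(np.argmax(light), light.shape)
```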
Detecting, Interpreting and Modifying the Heterogeneous Causal Network in Multi-Source Event Sequences
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-11-05 · DOI: 10.1111/cgf.15267
Shaobin Xu, Minghui Sun
Abstract: Uncovering causal relations from event sequences to guide decision-making has become an essential task across various domains. Unfortunately, this task remains challenging because real-world event sequences are usually collected from multiple sources. Most existing works are designed for homogeneous causal analysis between events from a single source, without considering cross-source causality. In this work, we propose a heterogeneous causal analysis algorithm that detects the heterogeneous causal network between high-level events in multi-source event sequences while preserving the causal semantic relationships between diverse data sources. The flexibility of our algorithm additionally allows high-level event similarity to be incorporated into the learning model and provides a fuzzy modification mechanism. Based on the algorithm, we further propose a visual analytics framework that supports interpreting the causal network at three granularities and offers a multi-granularity modification mechanism to incorporate user feedback efficiently. We evaluate the accuracy of our algorithm through an experimental study, illustrate the usefulness of our system through a case study, and demonstrate the efficiency of our modification mechanisms through a user study.
Citations: 0
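For intuition about the setting only, a naive co-occurrence baseline for cross-source event influence might look like the sketch below. The paper's heterogeneous causal discovery algorithm is substantially more sophisticated; the windowing and normalization here are assumptions.

```python
from collections import defaultdict

def causal_strength(events, window=5.0):
    """Naive baseline: fraction of `a` events followed by a `b` event
    (from any source) within `window` time units. `events` is a list of
    (time, source, event_type) tuples sorted by time. This only makes the
    multi-source setting concrete -- it is NOT the paper's algorithm."""
    counts, totals = defaultdict(int), defaultdict(int)
    for i, (t_i, _, a) in enumerate(events):
        totals[a] += 1
        seen = set()
        for t_j, _, b in events[i + 1:]:
            if t_j - t_i > window:
                break
            if b not in seen:          # count each follower type once per event
                counts[(a, b)] += 1
                seen.add(b)
    return {(a, b): c / totals[a] for (a, b), c in counts.items()}
```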
GSEditPro: 3D Gaussian Splatting Editing with Attention-based Progressive Localization
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-11-04 · DOI: 10.1111/cgf.15215
Y. Sun, R. Tian, X. Han, X. Liu, Y. Zhang, K. Xu
Abstract: With the emergence of large-scale text-to-image (T2I) models and implicit 3D representations like Neural Radiance Fields (NeRF), many text-driven generative editing methods based on NeRF have appeared. However, the implicit encoding of geometric and textural information poses challenges in accurately locating and controlling objects during editing. Recently, significant advances have been made in editing methods for 3D Gaussian Splatting, a real-time rendering technique that relies on an explicit representation. However, these methods still suffer from inaccurate localization and limited control over editing. To tackle these challenges, we propose GSEditPro, a novel 3D scene editing framework that allows users to perform various creative and precise edits using only text prompts. Leveraging the explicit nature of the 3D Gaussian distribution, we introduce an attention-based progressive localization module that adds semantic labels to each Gaussian during rendering. This enables precise localization of editing areas by classifying Gaussians according to their relevance to the editing prompts, derived from the cross-attention layers of the T2I model. Furthermore, we present an innovative editing optimization method based on 3D Gaussian Splatting that obtains stable and refined editing results through the guidance of Score Distillation Sampling and pseudo ground truth. We demonstrate the efficacy of our method through extensive experiments.
Citations: 0
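A minimal sketch of the thresholding step in attention-based localization, assuming per-Gaussian relevance scores have already been accumulated by splatting the editing prompt's cross-attention weights over many rendered views; the normalization and threshold are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def editable_mask(relevance: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Label Gaussians as editable from per-Gaussian relevance scores
    (one score per Gaussian, assumed precomputed from cross-attention)."""
    r = (relevance - relevance.min()) / (np.ptp(relevance) + 1e-8)  # to [0, 1]
    return r > threshold  # only these Gaussians receive SDS editing gradients
```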
Multiscale Spectral Manifold Wavelet Regularizer for Unsupervised Deep Functional Maps
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-11-04 · DOI: 10.1111/cgf.15230
Shengjun Liu, Jing Meng, Ling Hu, Yueyu Guo, Xinru Liu, Xiaoxia Yang, Haibo Wang, Qinsong Li
Abstract: In deep functional maps, the regularizer used when computing the functional map is crucial for ensuring the global consistency of the computed pointwise map. Because regularizers integrated into deep learning must be differentiable, it is not trivial to incorporate informative axiomatic structural constraints, such as an orientation-preserving term, into deep functional maps. Commonly used regularizers include the Laplacian-commutativity term and the resolvent Laplacian commutativity term, but both are limited to single-scale analysis when capturing geometric information. To this end, we propose a novel and theoretically well-justified regularizer that makes the functional map commute with a multiscale spectral manifold wavelet operator. This regularizer enhances the isometric constraints of the functional map and endows it with better structural properties through multiscale analysis. Furthermore, we design an unsupervised deep functional map incorporating the regularizer in a fully differentiable way. Quantitative and qualitative comparisons with several existing techniques on (near-)isometric and non-isometric datasets show our method's superior accuracy and generalization capabilities. Additionally, we show that our regularizer can easily be inserted into other functional map methods and improve their accuracy.
Citations: 0
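In the functional-map setting, a commutativity regularizer of this kind can be written compactly in the spectral basis. The sketch below assumes a Mexican-hat-style wavelet generator g(t) = t·exp(-t) and a fixed set of scales; the paper's actual wavelet construction may differ.

```python
import torch

def wavelet_commutativity_loss(C, evals_x, evals_y, scales=(0.5, 1.0, 2.0)):
    """Sum over scales of ||C Dx(s) - Dy(s) C||_F^2, where Dx, Dy are
    diagonal spectral wavelet operators g(s * lambda) on the two shapes.
    C is the (ky, kx) functional map; evals_* are the Laplace-Beltrami
    eigenvalues of each shape. g(t) = t * exp(-t) is an assumed generator."""
    loss = C.new_zeros(())
    for s in scales:
        gx = (s * evals_x) * torch.exp(-s * evals_x)   # g(s * lambda) on shape X
        gy = (s * evals_y) * torch.exp(-s * evals_y)   # g(s * lambda) on shape Y
        # C @ diag(gx) - diag(gy) @ C, via broadcasting
        loss = loss + ((C * gx[None, :] - gy[:, None] * C) ** 2).sum()
    return loss
```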
Distinguishing Structures from Textures by Patch-based Contrasts around Pixels for High-quality and Efficient Texture filtering
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-11-04 · DOI: 10.1111/cgf.15212
Shengchun Wang, Panpan Xu, Fei Hou, Wencheng Wang, Chong Zhao
Abstract: It remains challenging for existing methods to distinguish structures from texture details, which hampers texture filtering. Observing that the textures on the two sides of a structural edge always differ considerably in appearance, we determine whether a pixel lies on a structural edge by exploiting the appearance contrast between patches around the pixel, and we further propose an efficient implementation. We demonstrate that our method distinguishes structures from texture details more effectively than existing methods, and the patches we require for texture measurement can be at least half the size of those used in existing methods. Thus, we improve texture filtering in both quality and efficiency, as shown by our experimental results; e.g., we can handle textured images with a resolution of 800 × 600 pixels in real time. (The code is available at https://github.com/hefengxiyulu/MLPC)
Citations: 0
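The core measurement can be illustrated with a tiny sketch: compare the mean intensities of two patches straddling a pixel. A real detector would use richer patch statistics and test several directions; the patch radius and offset here are assumptions.

```python
import numpy as np

def patch_contrast(img: np.ndarray, y: int, x: int, r: int = 2, offset: int = 3) -> float:
    """Contrast between two patches straddling pixel (y, x) horizontally.
    A large value suggests a structural edge (the two sides look different);
    a small value suggests the pixel lies inside a texture region."""
    def mean_patch(cy, cx):
        return float(img[max(cy - r, 0):cy + r + 1, max(cx - r, 0):cx + r + 1].mean())
    return abs(mean_patch(y, x - offset) - mean_patch(y, x + offset))
```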
Ray Tracing Animated Displaced Micro-Meshes
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-10-30 · DOI: 10.1111/cgf.15225
Holger Gruen, Carsten Benthin, Andrew Kensler, Joshua Barczak, David McAllister
Abstract: We present a new method that allows efficient ray tracing of virtually artefact-free animated displaced micro-meshes (DMMs) [MMT23] while preserving their low memory footprint and low BVH build and update cost. DMMs allow compact representation of micro-triangle geometry through hierarchical encoding of displacements. Displacements are computed with respect to a coarse base mesh and are used to displace new vertices introduced during 1:4 subdivision of the base mesh. Applying a non-rigid transformation to the base mesh can result in silhouette and normal artefacts during animation. We propose an approach that prevents these artefacts by interpolating transformation matrices before applying them to the DMM representation. Our interpolation-based algorithm does not change DMM data structures, and it allows efficient bounding of animated micro-triangle geometry, which is essential for fast tessellation-free ray tracing of animated DMMs.
Citations: 0
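The key idea (blend the transforms first, then displace) can be sketched as below; plain linear matrix blending with barycentric weights is an assumed stand-in for the paper's interpolation scheme.

```python
import numpy as np

def blended_transform(M0, M1, M2, bary):
    """Blend the three 4x4 vertex transforms of a base triangle with
    barycentric weights *before* displacing a micro-vertex. Transforming
    after blending keeps neighbouring micro-triangles consistent under
    non-rigid animation, avoiding silhouette and normal cracks."""
    u, v, w = bary
    return u * M0 + v * M1 + w * M2

# Usage sketch: p is a displaced micro-vertex position in base-mesh space.
# p_world = (blended_transform(Ma, Mb, Mc, (u, v, w)) @ np.append(p, 1.0))[:3]
```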
Anisotropic Specular Image-Based Lighting Based on BRDF Major Axis Sampling
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-10-30 · DOI: 10.1111/cgf.15233
Giovanni Cocco, Cédric Zanni, Xavier Chermain
Abstract: Anisotropic specular appearances are ubiquitous in the environment: brushed stainless steel pans, kettles, elevator walls, fur, and scratched plastics. Real-time rendering of these materials with image-based lighting is challenging due to the complex shape of the bidirectional reflectance distribution function (BRDF). We propose an anisotropic specular image-based lighting method that can serve as a drop-in replacement for the standard bent-normal technique [Rev11]. Our method yields more realistic results at a 50% increase in computation time over the previous technique, using the same high-dynamic-range (HDR) preintegrated environment image. We use several environment samples positioned along the major axis of the specular microfacet BRDF. We derive an analytic formula to determine the two closest and two farthest points from the reflected direction on an approximation of the BRDF confidence region boundary. The two farthest points define the BRDF major axis, while the two closest points are used to approximate the BRDF width. The environment level of detail is derived from the BRDF width and the distance between the samples. We extensively compare our method with the bent-normal technique and the ground truth using the GGX specular BRDF.
Citations: 0
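A rough sketch of placing environment samples along the lobe's major axis is given below. The linear spread heuristic and sample count are assumptions; the paper instead derives the axis endpoints analytically from the BRDF confidence region and picks the mip level from the BRDF width.

```python
import numpy as np

def major_axis_samples(reflect_dir, tangent, alpha_t, alpha_b, n=4):
    """Place n environment samples along the specular lobe's major axis.
    reflect_dir and tangent are unit 3-vectors; for alpha_t > alpha_b the
    lobe is stretched along the tangent, so samples spread that way."""
    spread = max(alpha_t, alpha_b)
    offsets = np.linspace(-spread, spread, n)
    dirs = reflect_dir[None, :] + offsets[:, None] * tangent[None, :]
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)  # renormalize
```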
Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions
IF 2.7 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2024-10-30 · DOI: 10.1111/cgf.15266
E. Hoque, M. Saidul Islam
Abstract: Natural language and visualization are two complementary modalities of human communication that play a crucial role in conveying information effectively. While visualizations help people discover trends, patterns and anomalies in data, natural language descriptions help explain these insights. Thus, combining text with visualizations is a prevalent technique for effectively delivering the core message of the data. Given the rise of natural language generation (NLG), there is a growing interest in automatically creating natural language descriptions for visualizations, which can be used as chart captions, for answering questions about charts, or for telling data-driven stories. In this survey, we systematically review the state of the art on NLG for visualizations and introduce a taxonomy of the problem. The NLG tasks fall within the domain of natural language interfaces (NLIs) for visualization, an area that has garnered significant attention from both the research community and industry. To narrow down the scope of the survey, we primarily concentrate on the research works that focus on text generation for visualizations. To characterize the NLG problem and the design space of proposed solutions, we pose five Wh-questions: why and how NLG tasks are performed for visualizations, what the task inputs and outputs are, and where and when the generated texts are integrated with visualizations. We categorize the solutions used in the surveyed papers based on these five Wh-questions. Finally, we discuss the key challenges and potential avenues for future research in this domain.
Citations: 0