Computer Graphics Forum: Latest Articles

Real-time Neural Denoising for Volume Rendering Using Dual-Input Feature Fusion Network
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-09-16 · DOI: 10.1111/cgf.70276
Chunxiao Xu, Xinran Xu, Jiatian Zhang, Yufei Liu, Yiheng Cao, Lingxiao Zhao
{"title":"Real-time Neural Denoising for Volume Rendering Using Dual-Input Feature Fusion Network","authors":"Chunxiao Xu,&nbsp;Xinran Xu,&nbsp;Jiatian Zhang,&nbsp;Yufei Liu,&nbsp;Yiheng Cao,&nbsp;Lingxiao Zhao","doi":"10.1111/cgf.70276","DOIUrl":"https://doi.org/10.1111/cgf.70276","url":null,"abstract":"<p>Direct volume rendering (DVR) is a widely used technique in the visualisation of volumetric data. As an important DVR technique, volumetric path tracing (VPT) simulates light transport to produce realistic rendering results, which provides enhanced perception and understanding for users, especially in the field of medical imaging. VPT, based on the Monte Carlo (MC) method, typically requires a large number of samples to generate noise-free results. However, in real-time applications, only a limited number of samples per pixel is allowed and significant noise can be created. This paper introduces a novel neural denoising approach that utilises a new feature fusion method for VPT. Our method uses a feature decomposition technique that separates radiance into components according to noise levels. Our new decomposition technique mitigates biases found in the contemporary decoupling denoising algorithm and shows better utilisation of samples. A lightweight dual-input network is designed to correlate these components with noise-free ground truth. Additionally, for denoising sequences of video frames, we develop a learning-based temporal method that calculates temporal weight maps, blending reprojected results of previous frames with spatially denoised current frames. Comparative results demonstrate that our network performs faster inference than existing methods and can produce denoised output of higher quality in real time.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 6","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145135446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hi3DFace: High-Realistic 3D Face Reconstruction From a Single Occluded Image
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-09-05 · DOI: 10.1111/cgf.70277
Dongjin Huang, Yongsheng Shi, Jiantao Qu, Jinhua Liu, Wen Tang
{"title":"Hi3DFace: High-Realistic 3D Face Reconstruction From a Single Occluded Image","authors":"Dongjin Huang,&nbsp;Yongsheng Shi,&nbsp;Jiantao Qu,&nbsp;Jinhua Liu,&nbsp;Wen Tang","doi":"10.1111/cgf.70277","DOIUrl":"https://doi.org/10.1111/cgf.70277","url":null,"abstract":"<p>We propose Hi3DFace, a novel framework for simultaneous de-occlusion and high-fidelity 3D face reconstruction. To address real-world occlusions, we construct a diverse facial dataset by simulating common obstructions and present TMANet, a transformer-based multi-scale attention network that effectively removes occlusions and restores clean face images. For the 3D face reconstruction stage, we propose a coarse-medium-fine self-supervised scheme. In the coarse reconstruction pipeline, we adopt a face regression network to predict 3DMM coefficients for generating a smooth 3D face. In the medium-scale reconstruction pipeline, we propose a novel depth displacement network, DDFTNet, to remove noise and restore rich details to the smooth 3D geometry. In the fine-scale reconstruction pipeline, we design a GCN (graph convolutional network) refiner to enhance the fidelity of 3D textures. Additionally, a light-aware network (LightNet) is proposed to distil lighting parameters, ensuring illumination consistency between reconstructed 3D faces and input images. Extensive experimental results demonstrate that the proposed Hi3DFace significantly outperforms state-of-the-art reconstruction methods on four public datasets, and five constructed occlusion-type datasets. Hi3DFace achieves robustness and effectiveness in removing occlusions and reconstructing 3D faces from real-world occluded facial images.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 6","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145135248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Herds From Video: Learning a Microscopic Herd Model From Macroscopic Motion Data
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-09-05 · DOI: 10.1111/cgf.70225
Xianjin Gong, James Gain, Damien Rohmer, Sixtine Lyonnet, Julien Pettré, Marie-Paule Cani
{"title":"Herds From Video: Learning a Microscopic Herd Model From Macroscopic Motion Data","authors":"Xianjin Gong,&nbsp;James Gain,&nbsp;Damien Rohmer,&nbsp;Sixtine Lyonnet,&nbsp;Julien Pettré,&nbsp;Marie-Paule Cani","doi":"10.1111/cgf.70225","DOIUrl":"https://doi.org/10.1111/cgf.70225","url":null,"abstract":"<p>We present a method for animating herds that automatically tunes a microscopic herd model based on a short video clip of real animals. Our method handles videos with dense herds, where individual animal motion cannot be separated out. Our contribution is a novel framework for extracting macroscopic herd behaviour from such video clips, and then deriving the microscopic agent parameters that best match this behaviour.</p><p>To support this learning process, we extend standard agent models to provide a separation between leaders and followers, better match the occlusion and field-of-view limitations of real animals, support differentiable parameter optimization and improve authoring control. We validate the method by showing that once optimized, the social force and perception parameters of the resulting herd model are accurate enough to predict subsequent frames in the video, even for macroscopic properties not directly incorporated in the optimization process. Furthermore, the extracted herding characteristics can be applied to any terrain with a palette and region-painting approach that generalizes to different herd sizes and leader trajectories. This enables the authoring of herd animations in new environments while preserving learned behaviour.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 6","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145135265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FRIDU: Functional Map Refinement with Guided Image Diffusion
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-08-28 · DOI: 10.1111/cgf.70203
Avigail Cohen Rimon, Mirela Ben-Chen, Or Litany
{"title":"FRIDU: Functional Map Refinement with Guided Image Diffusion","authors":"Avigail Cohen Rimon,&nbsp;Mirela Ben-Chen,&nbsp;Or Litany","doi":"10.1111/cgf.70203","DOIUrl":"https://doi.org/10.1111/cgf.70203","url":null,"abstract":"<p>We propose a novel approach for refining a given correspondence map between two shapes. A correspondence map represented as a <i>functional map</i>, namely a change of basis matrix, can be additionally treated as a 2D image. With this perspective, we train an <i>image diffusion model</i> directly in the space of functional maps, enabling it to generate accurate maps conditioned on an inaccurate initial map. The training is done purely in the functional space, and thus is highly efficient. At inference time, we use the pointwise map corresponding to the current functional map as <i>guidance</i> during the diffusion process. The guidance can additionally encourage different functional map objectives, such as orthogonality and commutativity with the Laplace-Beltrami operator. We show that our approach is competitive with state-of-the-art methods of map refinement and that guided diffusion models provide a promising pathway to functional map processing.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 5","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70203","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144914910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Atomizer: Beyond Non-Planar Slicing for Fused Filament Fabrication
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-08-28 · DOI: 10.1111/cgf.70189
X. Chermain, G. Cocco, C. Zanni, E. Garner, P. A. Hugron, S. Lefebvre
{"title":"Atomizer: Beyond Non-Planar Slicing for Fused Filament Fabrication","authors":"X. Chermain,&nbsp;G. Cocco,&nbsp;C. Zanni,&nbsp;E. Garner,&nbsp;P. A. Hugron,&nbsp;S. Lefebvre","doi":"10.1111/cgf.70189","DOIUrl":"https://doi.org/10.1111/cgf.70189","url":null,"abstract":"<p>Fused filament fabrication (FFF) enables users to quickly design and fabricate parts with unprecedented geometric complexity, fine-tuning both the structural and aesthetic properties of each object. Nevertheless, the full potential of this technology has yet to be realized, as current slicing methods fail to fully exploit the deposition freedom offered by modern 3D printers. In this work, we introduce a novel approach to toolpath generation that moves beyond the traditional layer-based concept. We use frames, referred to as <i>atoms</i>, as solid elements instead of slices. We optimize the distribution of atoms within the part volume to ensure even spacing and smooth orientation while accurately capturing the part's geometry. Although these atoms collectively represent the complete object, they do not inherently define a fabrication plan. To address this, we compute an extrusion toolpath as an ordered sequence of atoms that, when followed, provides a collision-free fabrication strategy. This general approach is robust, requires minimal user intervention compared to existing techniques, and integrates many of the best features into a unified framework: precise deposition conforming to non-planar surfaces, effective filling of narrow features – down to a single path – and the capability to locally print vertical structures before transitioning elsewhere. Additionally, it enables entirely new capabilities, such as anisotropic appearance fabrication on curved surfaces.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 5","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144915218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Controlling Quadric Error Simplification with Line Quadrics
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-08-28 · DOI: 10.1111/cgf.70184
Hsueh-Ti Derek Liu, Mehdi Rahimzadeh, Victor Zordan
{"title":"Controlling Quadric Error Simplification with Line Quadrics","authors":"Hsueh-Ti Derek Liu,&nbsp;Mehdi Rahimzadeh,&nbsp;Victor Zordan","doi":"10.1111/cgf.70184","DOIUrl":"https://doi.org/10.1111/cgf.70184","url":null,"abstract":"<p>This work presents a method to control the output of mesh simplification algorithms based on iterative edge collapses. Traditional mesh simplification focuses on preserving the visual appearance. Despite still being an important criterion, other geometric properties also play critical roles in different applications, such as triangle quality for computations. This motivates our work to stay under the umbrella of the popular quadric error mesh simplification, while proposing different ways to control the simplified mesh to possess other geometric properties. The key ingredient of our work is another quadric error, called <i>line quadrics</i>, which can be seamlessly added to the vanilla quadric error metric. We show that, theoretically and empirically, adding our line quadrics can improve the numerics and encourage the simplified mesh to have uniformly distributed vertices. If we spread the line quadric adaptively to different regions, it can easily lead to soft preservation of feature vertices and edges. Our method is simple to implement, requiring only a few lines of code change on top of the original quadric error simplification, and can lead to a variety of user controls.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 5","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144915222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Uniform Sampling of Surfaces by Casting Rays
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-08-28 · DOI: 10.1111/cgf.70202
Selena Ling, Abhishek Madan, Nicholas Sharp, Alec Jacobson
{"title":"Uniform Sampling of Surfaces by Casting Rays","authors":"Selena Ling,&nbsp;Abhishek Madan,&nbsp;Nicholas Sharp,&nbsp;Alec Jacobson","doi":"10.1111/cgf.70202","DOIUrl":"https://doi.org/10.1111/cgf.70202","url":null,"abstract":"<p>Randomly sampling points on surfaces is an essential operation in geometry processing. This sampling is computationally straightforward on explicit meshes, but it is much more difficult on other shape representations, such as widely-used implicit surfaces. This work studies a simple and general scheme for sampling points on a surface, which is derived from a connection to the intersections of random rays with the surface. Concretely, given a subroutine to cast a ray against a surface and find all intersections, we can use that subroutine to uniformly sample white noise points on the surface. This approach is particularly effective in the context of implicit signed distance functions, where sphere marching allows us to efficiently cast rays and sample points, without needing to extract an intermediate mesh. We analyze the basic method to show that it guarantees uniformity, and find experimentally that it is significantly more efficient than alternative strategies on a variety of representations. Furthermore, we show extensions to blue noise sampling and stratified sampling, and applications to deform neural implicit surfaces as well as moment estimation.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 5","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70202","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144914905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GreenCloud: Volumetric Gradient Filtering via Regularized Green's Functions
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-08-28 · DOI: 10.1111/cgf.70207
Kenji Tojo, Nobuyuki Umetani
{"title":"GreenCloud: Volumetric Gradient Filtering via Regularized Green's Functions","authors":"Kenji Tojo,&nbsp;Nobuyuki Umetani","doi":"10.1111/cgf.70207","DOIUrl":"https://doi.org/10.1111/cgf.70207","url":null,"abstract":"<p>Gradient-based optimization is a fundamental tool in geometry processing, but it is often hampered by geometric distortion arising from noisy or sparse gradients. Existing methods mitigate these issues by filtering (i.e., diffusing) gradients over a surface mesh, but they require explicit mesh connectivity and solving large linear systems, making them unsuitable for point-based representation. In this work, we introduce a gradient filtering method tailored for point-based geometry. Our method bypasses explicit connectivity by leveraging regularized Green's functions to directly compute the filtered gradient field from discrete spatial points. Additionally, our approach incorporates elastic deformation based on Green's function of linear elasticity (known as Kelvinlets), reproducing various elastic behaviors such as smoothness and volume preservation while improving robustness in affine transformations. We further accelerate computation using a hierarchical Barnes–Hut style approximation, enabling scalable optimization of one million points. Our method significantly improves convergence across a wide range of applications, including reconstruction, editing, stylization, and simplified optimization experiments with Gaussian splatting.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 5","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144914971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Front Matter
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-08-28 · DOI: 10.1111/cgf.70200
{"title":"Front Matter","authors":"","doi":"10.1111/cgf.70200","DOIUrl":"https://doi.org/10.1111/cgf.70200","url":null,"abstract":"&lt;p&gt;Bilbao, Spain&lt;/p&gt;&lt;p&gt;July 2 – 4, 2025&lt;/p&gt;&lt;p&gt;&lt;b&gt;Conference Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Michael Barton, BCAM&lt;/p&gt;&lt;p&gt;Leif Kobbelt, RWTH Aachen University&lt;/p&gt;&lt;p&gt;&lt;b&gt;Technical Program Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Marco Attene, CNR&lt;/p&gt;&lt;p&gt;Silvia Sellán, Columbia University&lt;/p&gt;&lt;p&gt;&lt;b&gt;Graduate School Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Michal Bizzarri, University of West Bohemia Jing Ren, ETH Zurich&lt;/p&gt;&lt;p&gt;&lt;b&gt;Steering Committee&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Leif Kobbelt, RWTH Aachen University, DE&lt;/p&gt;&lt;p&gt;Marc Alexa, Technische Universität Berlin, DE&lt;/p&gt;&lt;p&gt;Pierre Alliez, INRIA, FR&lt;/p&gt;&lt;p&gt;Mirela Ben-Chen, Technion-IIT, IL&lt;/p&gt;&lt;p&gt;Hui Huang, Shenzhen University, CN&lt;/p&gt;&lt;p&gt;Niloy Mitra, University College London, GB&lt;/p&gt;&lt;p&gt;Daniele Panozzo, New York University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Alexa, Marc&lt;/b&gt;&lt;/p&gt;&lt;p&gt;TU Berlin, DE&lt;/p&gt;&lt;p&gt;&lt;b&gt;Alliez, Pierre&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Inria, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Babaei, Vahid&lt;/b&gt;&lt;/p&gt;&lt;p&gt;MPI, DE&lt;/p&gt;&lt;p&gt;&lt;b&gt;Barton, Michael&lt;/b&gt;&lt;/p&gt;&lt;p&gt;BCAM, ES&lt;/p&gt;&lt;p&gt;&lt;b&gt;Bo, Pengbo&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Harbin Institute of Technology, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Bærentzen, Jakob Andreas&lt;/b&gt;&lt;/p&gt;&lt;p&gt;TU Denmark, DK&lt;/p&gt;&lt;p&gt;&lt;b&gt;Belyaev, Alexander&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Heriot-Watt University, GB&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ben-Chen, Mirela&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Technion - IIT, IL&lt;/p&gt;&lt;p&gt;&lt;b&gt;Benes, Bedrich&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Purdue University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Bommes, David&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Bern, CH&lt;/p&gt;&lt;p&gt;&lt;b&gt;Bonnel, Nicolas&lt;/b&gt;&lt;/p&gt;&lt;p&gt;CNRS / University Lyon, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Botsch, Mario&lt;/b&gt;&lt;/p&gt;&lt;p&gt;TU Dortmund, DE&lt;/p&gt;&lt;p&gt;&lt;b&gt;Boubekeur, Tamy&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Adobe Research, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Campen, Marcel&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Universität Osnabrück, DE&lt;/p&gt;&lt;p&gt;&lt;b&gt;Castellani, Umberto&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Verona, IT&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chaine, Raphaelle&lt;/b&gt;&lt;/p&gt;&lt;p&gt;LIRIS - CNRS, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Cignoni, Paolo&lt;/b&gt;&lt;/p&gt;&lt;p&gt;ISTI - CNR, IT&lt;/p&gt;&lt;p&gt;&lt;b&gt;Cordonnier, Guillaume&lt;/b&gt;&lt;/p&gt;&lt;p&gt;INRIA, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chen, Zhonggui&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Xiamen University, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chen, Renjie&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Science and Technology, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chenxi, Liu&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Toronto, CA&lt;/p&gt;&lt;p&gt;&lt;b&gt;Chien, Edward&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Boston University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Digne, Julie&lt;/b&gt;&lt;/p&gt;&lt;p&gt;LIRIS - CNRS, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Faraj, Noura&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Université de Montpellier - LIRMM, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ferguson, Zachary&lt;/b&gt;&lt;/p&gt;&lt;p&gt;CLO Virtual Fashion, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Fu, Xiao-Ming&lt;/b&gt;&lt;/p&gt;&lt;p&gt;USTC, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Gao, Xifeng&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Tencent America, 
US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Gingold, Yotam&lt;/b&gt;&lt;/p&gt;&lt;p&gt;George Mason University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Gillespie, Mark&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Inria, FR&lt;/p&gt;&lt;p&gt;&lt;b&gt;Giorgi, Daniela&lt;/b&gt;&lt;/p&gt;&lt;p&gt;National Research Council of Italy, IT&lt;/p&gt;&lt;p&gt;&lt;b&gt;Guerrero, Paul&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Adobe, GB&lt;/p&gt;&lt;p&gt;&lt;b&gt;Hildebrandt, Klaus&lt;/b&gt;&lt;/p&gt;&lt;p&gt;TU Delft, NL&lt;/p&gt;&lt;p&gt;&lt;b&gt;Hanocka, Rana&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Chicago, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Herholz, Philipp&lt;/b&gt;&lt;/p&gt;&lt;p&gt;ETH Zurich, CH&lt;/p&gt;&lt;p&gt;&lt;b&gt;Hormann, Kai&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Università della Svizzera italiana, CH&lt;/p&gt;&lt;p&gt;&lt;b&gt;Huang, Jin&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Zhejiang University, CN&lt;/p&gt;&lt;p&gt;&lt;b&gt;Huang, Qixing&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Texas, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Jacobson, Alec&lt;/b&gt;&lt;/p&gt;&lt;p&gt;University of Toronto, CA&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ju, Tao&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Washington University in St. Louis, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Kazhdan, Misha&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Johns Hopkins University, US&lt;/p&gt;&lt;p&gt;&lt;b&gt;Keyser, John&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Texas A&amp;M University,","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 5","pages":"i-xii"},"PeriodicalIF":2.9,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70200","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144915122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MatAIRials: Isotropic Inflatable Metamaterials for Freeform Surface Design
IF 2.9 · CAS Tier 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-08-28 · DOI: 10.1111/cgf.70190
Siyuan He, Meng-Jan Wu, Arthur Lebée, Mélina Skouras
{"title":"MatAIRials: Isotropic Inflatable Metamaterials for Freeform Surface Design","authors":"Siyuan He,&nbsp;Meng-Jan Wu,&nbsp;Arthur Lebée,&nbsp;Mélina Skouras","doi":"10.1111/cgf.70190","DOIUrl":"https://doi.org/10.1111/cgf.70190","url":null,"abstract":"<p>Inflatable pads, such as those used as mattresses or protective equipment, are structures made of two planar membranes sealed according to periodic patterns, typically parallel lines or dots. In this work, we propose to treat these inflatables as <i>metamaterials</i>.</p><p>By considering novel sealing patterns with 6-fold symmetry, we are able to generate a family of inflatable materials whose macroscale contraction is <i>isotropic</i> and can be modulated by controlling the parameters of the seals. We leverage this property of our inflatable materials family to propose a simple and effective algorithm based on conformal mapping that allows us to design the layout of inflatable structures that can be fabricated flat and whose inflated shapes approximate those of given target freeform surfaces.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"44 5","pages":""},"PeriodicalIF":2.9,"publicationDate":"2025-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144915176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0