ACM SIGGRAPH 2005 Papers: Latest Publications

Local, deformable precomputed radiance transfer
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073335
Peter-Pike J. Sloan, B. Luna, John M. Snyder
{"title":"Local, deformable precomputed radiance transfer","authors":"Peter-Pike J. Sloan, B. Luna, John M. Snyder","doi":"10.1145/1186822.1073335","DOIUrl":"https://doi.org/10.1145/1186822.1073335","url":null,"abstract":"Precomputed radiance transfer (PRT) captures realistic lighting effects from distant, low-frequency environmental lighting but has been limited to static models or precomputed sequences. We focus on PRT for local effects such as bumps, wrinkles, or other detailed features, but extend it to arbitrarily deformable models. Our approach applies zonal harmonics (ZH) which approximate spherical functions as sums of circularly symmetric Legendre polynomials around different axes. By spatially varying both the axes and coefficients of these basis functions, we can fit to spatially varying transfer signals. Compared to the spherical harmonic (SH) basis, the ZH basis yields a more compact approximation. More important, it can be trivially rotated whereas SH rotation is expensive and unsuited for dense per-vertex or per-pixel evaluation. This property allows, for the first time, PRT to be mapped onto deforming models which re-orient the local coordinate frame. We generate ZH transfer models by fitting to PRT signals simulated on meshes or simple parametric models for thin membranes and wrinkles. We show how shading with ZH transfer can be significantly accelerated by specializing to a given lighting environment. Finally, we demonstrate real-time rendering results with soft shadows, inter-reflections, and subsurface scatter on deforming models.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124973488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 165
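
The trivially rotatable ZH representation is the heart of this method: a lobe is re-oriented simply by rotating its axis, with no coefficient transformation. Below is a minimal numpy sketch of evaluating a sum of ZH lobes at a direction, assuming per-band Legendre coefficients have already been fitted; the axes, coefficients, and the omission of normalization constants are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def eval_zh_lobes(direction, axes, coeffs):
    """Evaluate a sum of zonal-harmonics lobes at a unit direction.

    Lobe i is circularly symmetric around unit vector axes[i] and is
    stored as a Legendre series coeffs[i], so its value at direction s
    is sum_l coeffs[i, l] * P_l(axes[i] . s). Rotating a lobe only
    rotates its axis -- the property that makes ZH transfer cheap to
    re-orient on deforming models.
    """
    s = direction / np.linalg.norm(direction)
    total = 0.0
    for axis, c in zip(axes, coeffs):
        # legval evaluates the Legendre series sum_l c[l] * P_l(x)
        total += np.polynomial.legendre.legval(np.dot(axis, s), c)
    return total

# Two hypothetical order-3 lobes (bands l = 0..3), made-up coefficients
axes = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
coeffs = np.array([[0.5, 0.3, 0.1, 0.02],
                   [0.2, 0.1, 0.05, 0.01]])
print(eval_zh_lobes(np.array([0.0, 0.6, 0.8]), axes, coeffs))
```
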
Animating pictures with stochastic motion textures
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073273
Yung-Yu Chuang, Dan B. Goldman, K. Zheng, B. Curless, D. Salesin, R. Szeliski
{"title":"Animating pictures with stochastic motion textures","authors":"Yung-Yu Chuang, Dan B. Goldman, K. Zheng, B. Curless, D. Salesin, R. Szeliski","doi":"10.1145/1186822.1073273","DOIUrl":"https://doi.org/10.1145/1186822.1073273","url":null,"abstract":"In this paper, we explore the problem of enhancing still pictures with subtly animated motions. We limit our domain to scenes containing passive elements that respond to natural forces in some fashion. We use a semi-automatic approach, in which a human user segments the scene into a series of layers to be individually animated. Then, a \"stochastic motion texture\" is automatically synthesized using a spectral method, i.e., the inverse Fourier transform of a filtered noise spectrum. The motion texture is a time-varying 2D displacement map, which is applied to each layer. The resulting warped layers are then recomposited to form the animated frames. The result is a looping video texture created from a single still image, which has the advantages of being more controllable and of generally higher image quality and resolution than a video texture created from a video source. We demonstrate the technique on a variety of photographs and paintings.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123448670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 154
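
The synthesis step is a spectral method: shape a noise spectrum with a filter, then inverse-transform it. The sketch below shows the idea in 1D for a single displacement channel; the paper synthesizes a time-varying 2D displacement map per layer, and the 1/f^beta filter and normalization used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_motion_1d(n_frames, beta=2.0, amplitude=1.0):
    """Synthesize a looping 1D displacement signal as the inverse
    Fourier transform of a filtered noise spectrum. Larger beta
    concentrates energy at low frequencies, giving smoother sway; the
    result loops because it is built from a discrete Fourier basis.
    """
    freqs = np.fft.rfftfreq(n_frames)
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.shape)
    mags = np.zeros_like(freqs)
    mags[1:] = freqs[1:] ** (-beta / 2.0)   # filter; skip the DC term
    spectrum = mags * np.exp(1j * phases)
    signal = np.fft.irfft(spectrum, n=n_frames)
    return amplitude * signal / np.abs(signal).max()

dx = stochastic_motion_1d(256)   # e.g. horizontal sway for one layer
print(dx[:5])
```
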
Real-Time subspace integration for St. Venant-Kirchhoff deformable models
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073300
J. Barbič, Doug L. James
{"title":"Real-Time subspace integration for St. Venant-Kirchhoff deformable models","authors":"J. Barbič, Doug L. James","doi":"10.1145/1186822.1073300","DOIUrl":"https://doi.org/10.1145/1186822.1073300","url":null,"abstract":"In this paper, we present an approach for fast subspace integration of reduced-coordinate nonlinear deformable models that is suitable for interactive applications in computer graphics and haptics. Our approach exploits dimensional model reduction to build reduced-coordinate deformable models for objects with complex geometry. We exploit the fact that model reduction on large deformation models with linear materials (as commonly used in graphics) result in internal force models that are simply cubic polynomials in reduced coordinates. Coefficients of these polynomials can be precomputed, for efficient runtime evaluation. This allows simulation of nonlinear dynamics using fast implicit Newmark subspace integrators, with subspace integration costs independent of geometric complexity. We present two useful approaches for generating low-dimensional subspace bases: modal derivatives and an interactive sketching technique. Mass-scaled principal component analysis (mass-PCA) is suggested for dimensionality reduction. Finally, several examples are given from computer animation to illustrate high performance, including force-feedback haptic rendering of a complicated object undergoing large deformations.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125388031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 475
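
The observation that enables the runtime speed is that, after model reduction with linear (St. Venant-Kirchhoff) materials, the internal force is an exact cubic polynomial in the reduced coordinates, so its coefficients can be precomputed. A sketch of that evaluation follows; the reduced dimension r and the random tensors P, Q, R standing in for precomputed coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
r = 5  # reduced dimension (number of basis vectors)

# Stand-ins for the precomputed polynomial coefficients; in the paper
# these are derived from the reduced St. Venant-Kirchhoff FEM model.
P = rng.standard_normal((r, r))         # linear terms
Q = rng.standard_normal((r, r, r))      # quadratic terms
R = rng.standard_normal((r, r, r, r))   # cubic terms

def reduced_internal_force(q):
    """Cubic-polynomial internal force in reduced coordinates.
    Evaluation cost depends only on r, never on mesh size, which is
    why subspace integration is independent of geometric complexity."""
    return (P @ q
            + np.einsum('ijk,j,k->i', Q, q, q)
            + np.einsum('ijkl,j,k,l->i', R, q, q, q))

q = 0.01 * rng.standard_normal(r)
print(reduced_internal_force(q))
```
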
Blister: GPU-based rendering of Boolean combinations of free-form triangulated shapes
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073306
John Hable, J. Rossignac
{"title":"Blister: GPU-based rendering of Boolean combinations of free-form triangulated shapes","authors":"John Hable, J. Rossignac","doi":"10.1145/1186822.1073306","DOIUrl":"https://doi.org/10.1145/1186822.1073306","url":null,"abstract":"By combining depth peeling with a linear formulation of a Boolean expression called Blist, the Blister algorithm renders an arbitrary CSG model of n primitives in at most k steps, where k is the number of depth-layers in the arrangement of the primitives. Each step starts by rendering each primitive to produce candidate surfels on the next depth-layer. Then, it renders the primitives again, one at a time, to classify the candidate surfels against the primitive and to evaluate the Boolean expression directly on the GPU. Since Blist does not expand the CSG expression into a disjunctive (sum-of-products) form, Blister has O(kn) time complexity. We explain the Blist formulation while providing algorithms for CSG-to-Blist conversion and Blist-based parallel surfel classification. We report real-time performance for nontrivial CSG models. On hardware with an 8-bit stencil buffer, we can render all possible CSG expressions with 3909 primitives.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127720782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 73
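
For intuition about what is being evaluated per surfel, here is a CPU sketch of classifying a point against a CSG expression of sphere primitives. Blister's contribution, not attempted here, is flattening such a tree into a linear Blist form and evaluating it on the GPU with a few stencil bits per pixel across depth-peeled layers.

```python
import numpy as np

# Primitives as membership tests (True = point is inside).
def sphere(center, radius):
    c = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(p - c) <= radius

# A conventional CSG expression tree; Blister avoids expanding or
# recursing over such trees at render time.
def union(a, b):     return lambda p: a(p) or b(p)
def intersect(a, b): return lambda p: a(p) and b(p)
def subtract(a, b):  return lambda p: a(p) and not b(p)

csg = subtract(intersect(sphere((0.0, 0, 0), 1.0),
                         sphere((0.5, 0, 0), 1.0)),
               sphere((0.25, 0, 0), 0.3))

print(csg(np.array([0.9, 0.0, 0.0])))    # True: inside both spheres
print(csg(np.array([0.25, 0.0, 0.0])))   # False: carved out
```
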
Fourier slice photography
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073256
Ren Ng
{"title":"Fourier slice photography","authors":"Ren Ng","doi":"10.1145/1186822.1073256","DOIUrl":"https://doi.org/10.1145/1186822.1073256","url":null,"abstract":"This paper contributes to the theory of photograph formation from light fields. The main result is a theorem that, in the Fourier domain, a photograph formed by a full lens aperture is a 2D slice in the 4D light field. Photographs focused at different depths correspond to slices at different trajectories in the 4D space. The paper demonstrates the utility of this theorem in two different ways. First, the theorem is used to analyze the performance of digital refocusing, where one computes photographs focused at different depths from a single light field. The analysis shows in closed form that the sharpness of refocused photographs increases linearly with directional resolution. Second, the theorem yields a Fourier-domain algorithm for digital refocusing, where we extract the appropriate 2D slice of the light field's Fourier transform, and perform an inverse 2D Fourier transform. This method is faster than previous approaches.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129986427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 486
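
As a baseline for what the theorem accelerates, the sketch below performs spatial-domain digital refocusing by shift-and-add: each sub-aperture image is translated in proportion to its aperture position and the images are averaged. The shift convention assumes a two-plane light-field parameterization (an assumption, not the paper's exact equations); the Fourier slice method replaces this O(U*V) sum of 2D shifts with a single 2D slice of the 4D Fourier transform followed by an inverse 2D FFT.

```python
import numpy as np
from scipy.ndimage import shift

def refocus_shift_and_add(lf, alpha):
    """Refocus a light field lf of shape (U, V, H, W) -- angular
    coordinates (u, v) by spatial coordinates -- at relative focal
    depth alpha, by shifting and averaging sub-aperture images."""
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    d = 1.0 - 1.0 / alpha          # shift scale for this focal depth
    photo = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            photo += shift(lf[u, v], ((u - uc) * d, (v - vc) * d))
    return photo / (U * V)

# Tiny synthetic light field: 3x3 aperture samples of one random image
rng = np.random.default_rng(2)
lf = np.tile(rng.random((16, 16)), (3, 3, 1, 1))
print(refocus_shift_and_add(lf, alpha=1.5).shape)   # (16, 16)
```
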
Modeling hair from multiple views
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073267
Yichen Wei, E. Ofek, Long Quan, H. Shum
{"title":"Modeling hair from multiple views","authors":"Yichen Wei, E. Ofek, Long Quan, H. Shum","doi":"10.1145/1186822.1073267","DOIUrl":"https://doi.org/10.1145/1186822.1073267","url":null,"abstract":"In this paper, we propose a novel image-based approach to model hair geometry from images taken at multiple viewpoints. Unlike previous hair modeling techniques that require intensive user interactions or rely on special capturing setup under controlled illumination conditions, we use a handheld camera to capture hair images under uncontrolled illumination conditions. Our multi-view approach is natural and flexible for capturing. It also provides inherent strong and accurate geometric constraints to recover hair models.In our approach, the hair fibers are synthesized from local image orientations. Each synthesized fiber segment is validated and optimally triangulated from all visible views. The hair volume and the visibility of synthesized fibers can also be reliably estimated from multiple views. Flexibility of acquisition, little user interaction, and high quality results of recovered complex hair models are the key advantages of our method.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131232782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 124
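
The fiber synthesis starts from a dense field of local image orientations. One standard way to estimate such a field is the smoothed structure tensor, sketched below; this choice of filters is an assumption for illustration, not necessarily the paper's exact pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_orientation(image, sigma=2.0):
    """Per-pixel dominant orientation (radians, modulo pi) from the
    smoothed structure tensor. The +pi/2 converts the gradient
    direction into the along-stripe (hair-strand) direction."""
    gx = sobel(image, axis=1)
    gy = sobel(image, axis=0)
    Jxx = gaussian_filter(gx * gx, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy) + np.pi / 2.0

# Vertical synthetic stripes: estimated strand direction is ~90 degrees
y, x = np.mgrid[0:64, 0:64]
theta = local_orientation(np.sin(0.5 * x))
print(np.rad2deg(theta[32, 32]))
```
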
Cache-oblivious mesh layouts
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073278
Sung-eui Yoon, Peter Lindstrom, Valerio Pascucci, Dinesh Manocha
{"title":"Cache-oblivious mesh layouts","authors":"Sung-eui Yoon, Peter Lindstrom, Valerio Pascucci, Dinesh Manocha","doi":"10.1145/1186822.1073278","DOIUrl":"https://doi.org/10.1145/1186822.1073278","url":null,"abstract":"We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes.We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2--20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133383726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 129
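
A cache-oblivious score cannot assume any block size, so it penalizes the layout distance between topologically adjacent vertices on a logarithmic scale, which weights all power-of-two block sizes at once. The summed-log-span form below follows the spirit of the paper's metric but is an illustrative simplification, not its actual derivation.

```python
import numpy as np

def log_span_metric(order, edges):
    """Score a vertex layout by the summed log2 spans of its edges.

    order: order[i] is the layout position of vertex i
    edges: iterable of (i, j) index pairs of adjacent vertices
    Lower is better: coherent layouts keep neighbors close together.
    """
    pos = np.asarray(order)
    return sum(np.log2(abs(int(pos[i]) - int(pos[j])) + 1.0)
               for i, j in edges)

# A 10-vertex path graph: sequential layout vs. a random permutation
edges = [(i, i + 1) for i in range(9)]
rng = np.random.default_rng(3)
print(log_span_metric(list(range(10)), edges))      # low cost
print(log_span_metric(rng.permutation(10), edges))  # higher cost
```
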
Modeling and visualization of leaf venation patterns
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073251
Adam Runions, M. Fuhrer, Brendan Lane, P. Federl, A. Rolland-Lagan, P. Prusinkiewicz
{"title":"Modeling and visualization of leaf venation patterns","authors":"Adam Runions, M. Fuhrer, Brendan Lane, P. Federl, A. Rolland-Lagan, P. Prusinkiewicz","doi":"10.1145/1186822.1073251","DOIUrl":"https://doi.org/10.1145/1186822.1073251","url":null,"abstract":"We introduce a class of biologically-motivated algorithms for generating leaf venation patterns. These algorithms simulate the interplay between three processes: (1) development of veins towards hormone (auxin) sources embedded in the leaf blade; (2) modification of the hormone source distribution by the proximity of veins; and (3) modification of both the vein pattern and source distribution by leaf growth. These processes are formulated in terms of iterative geometric operations on sets of points that represent vein nodes and auxin sources. In addition, a vein connection graph is maintained to determine vein widths. The effective implementation of the algorithms relies on the use of space subdivision (Voronoi diagrams) and time coherence between iteration steps. Depending on the specification details and parameters used, the algorithms can simulate many types of venation patterns, both open (tree-like) and closed (with loops). Applications of the presented algorithms include texture and detailed structure generation for image synthesis purposes, and modeling of morphogenetic processes in support of biological research.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133805042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 333
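
A minimal sketch of processes (1) and (2) on a fixed (non-growing) blade follows, producing an open, tree-like pattern: each auxin source attracts its nearest vein node, attracting nodes grow one step toward the mean direction of their sources, and sources that a vein has reached are removed. The step size, kill distance, and point counts are made-up parameters, and process (3), leaf growth, is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def grow_veins(sources, root, step=0.02, kill_dist=0.05, n_iter=200):
    """Grow an open venation pattern toward auxin sources.
    Returns vein node positions and parent indices (the vein
    connection graph used for width assignment)."""
    nodes = [np.asarray(root, dtype=float)]
    parents = [-1]
    for _ in range(n_iter):
        if len(sources) == 0:
            break
        pts = np.array(nodes)
        # (1) each source influences only its closest vein node
        d = np.linalg.norm(sources[:, None] - pts[None, :], axis=2)
        closest = d.argmin(axis=1)
        for k in set(closest):
            dirs = sources[closest == k] - pts[k]
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            v = dirs.sum(axis=0)
            n = np.linalg.norm(v)
            if n == 0.0:
                continue
            nodes.append(pts[k] + step * v / n)
            parents.append(int(k))
        # (2) veins deplete nearby auxin sources
        pts = np.array(nodes)
        dmin = np.linalg.norm(sources[:, None] - pts[None, :],
                              axis=2).min(axis=1)
        sources = sources[dmin > kill_dist]
    return np.array(nodes), parents

sources = rng.random((300, 2))               # auxin over a unit blade
veins, parents = grow_veins(sources, root=[0.5, 0.0])
print(len(veins), "vein nodes")
```
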
Adaptive dynamics of articulated bodies
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073294
Stéphane Redon, Nico Galoppo, Ming C. Lin
{"title":"Adaptive dynamics of articulated bodies","authors":"Stéphane Redon, Nico Galoppo, Ming C. Lin","doi":"10.1145/1186822.1073294","DOIUrl":"https://doi.org/10.1145/1186822.1073294","url":null,"abstract":"Forward dynamics is central to physically-based simulation and control of articulated bodies. We present an adaptive algorithm for computing forward dynamics of articulated bodies: using novel motion error metrics, our algorithm can automatically simplify the dynamics of a multi-body system, based on the desired number of degrees of freedom and the location of external forces and active joint forces. We demonstrate this method in plausible animation of articulated bodies, including a large-scale simulation of 200 animated humanoids and multi-body dynamics systems with many degrees of freedom. The graceful simplification allows us to achieve up to two orders of magnitude performance improvement in several complex benchmarks.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124401503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 107
Variational tetrahedral meshing
ACM SIGGRAPH 2005 Papers Pub Date: 2005-07-01 DOI: 10.1145/1186822.1073238
P. Alliez, D. Cohen-Steiner, M. Yvinec, M. Desbrun
{"title":"Variational tetrahedral meshing","authors":"P. Alliez, D. Cohen-Steiner, M. Yvinec, M. Desbrun","doi":"10.1145/1186822.1073238","DOIUrl":"https://doi.org/10.1145/1186822.1073238","url":null,"abstract":"In this paper, a novel Delaunay-based variational approach to isotropic tetrahedral meshing is presented. To achieve both robustness and efficiency, we minimize a simple mesh-dependent energy through global updates of both vertex positions and connectivity. As this energy is known to be the ∠1 distance between an isotropic quadratic function and its linear interpolation on the mesh, our minimization procedure generates well-shaped tetrahedra. Mesh design is controlled through a gradation smoothness parameter and selection of the desired number of vertices. We provide the foundations of our approach by explaining both the underlying variational principle and its geometric interpretation. We demonstrate the quality of the resulting meshes through a series of examples.","PeriodicalId":211118,"journal":{"name":"ACM SIGGRAPH 2005 Papers","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2005-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114859339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 388
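
A 2D analogue of the alternating minimization is sketched below, assuming the standard ODT-style vertex update (move each interior vertex to the area-weighted average of the circumcenters of its incident triangles) between Delaunay connectivity rebuilds. Gradation control, boundary treatment, and the sliver safeguards discussed in the paper are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcenters(pts, tris):
    """Circumcenters of 2D triangles given as rows of vertex indices."""
    a, b, c = pts[tris[:, 0]], pts[tris[:, 1]], pts[tris[:, 2]]
    ab, ac = b - a, c - a
    d = 2.0 * (ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])
    ux = (ac[:, 1] * (ab ** 2).sum(1) - ab[:, 1] * (ac ** 2).sum(1)) / d
    uy = (ab[:, 0] * (ac ** 2).sum(1) - ac[:, 0] * (ab ** 2).sum(1)) / d
    return a + np.stack([ux, uy], axis=1)

def odt_iteration(pts, n_boundary):
    """One alternating step: rebuild Delaunay connectivity, then move
    each interior vertex to the area-weighted mean of its incident
    circumcenters. The first n_boundary points stay fixed."""
    tri = Delaunay(pts).simplices          # connectivity update
    cc = circumcenters(pts, tri)
    a, b, c = pts[tri[:, 0]], pts[tri[:, 1]], pts[tri[:, 2]]
    area = 0.5 * np.abs((b - a)[:, 0] * (c - a)[:, 1]
                        - (b - a)[:, 1] * (c - a)[:, 0])
    num = np.zeros_like(pts)
    den = np.zeros(len(pts))
    for k in range(3):
        np.add.at(num, tri[:, k], area[:, None] * cc)
        np.add.at(den, tri[:, k], area)
    out = pts.copy()
    inner = np.arange(n_boundary, len(pts))
    out[inner] = num[inner] / den[inner, None]
    return out

rng = np.random.default_rng(5)
t = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
pts = np.vstack([np.stack([np.cos(t), np.sin(t)], axis=1),  # fixed ring
                 rng.uniform(-0.6, 0.6, (60, 2))])
for _ in range(10):
    pts = odt_iteration(pts, n_boundary=32)
print(pts.shape)
```
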