ACM SIGGRAPH 2005 Papers: Latest Publications

Dual photography
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073257
P. Sen, Billy Chen, Gaurav Garg, Steve Marschner, M. Horowitz, M. Levoy, H. Lensch
We present a novel photographic technique called dual photography, which exploits Helmholtz reciprocity to interchange the lights and cameras in a scene. With a video projector providing structured illumination, reciprocity permits us to generate pictures from the viewpoint of the projector, even though no camera was present at that location. The technique is completely image-based, requiring no knowledge of scene geometry or surface properties, and by its nature automatically includes all transport paths, including shadows, inter-reflections and caustics. In its simplest form, the technique can be used to take photographs without a camera; we demonstrate this by capturing a photograph using a projector and a photo-resistor. If the photo-resistor is replaced by a camera, we can produce a 4D dataset that allows for relighting with 2D incident illumination. Using an array of cameras we can produce a 6D slice of the 8D reflectance field that allows for relighting with arbitrary light fields. Since an array of cameras can operate in parallel without interference, whereas an array of light sources cannot, dual photography is fundamentally a more efficient way to capture such a 6D dataset than a system based on multiple projectors and one camera. As an example, we show how dual photography can be used to capture and relight scenes.
Citations: 306
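The interchange of lights and cameras comes down to transposing the light transport matrix: if T maps projector pixels to camera pixels, Helmholtz reciprocity says the view from the projector's position is governed by T's transpose. A toy NumPy sketch (matrix sizes and variable names are illustrative, not from the paper's system):

```python
import numpy as np

# Hypothetical sketch: dual photography as a transport-matrix transpose.
# T is the measured light transport matrix (camera pixels x projector pixels).
rng = np.random.default_rng(0)
n_cam, n_proj = 6, 4
T = rng.random((n_cam, n_proj))          # toy stand-in for measured transport

proj_pattern = rng.random(n_proj)        # light emitted by the projector
primal = T @ proj_pattern                # image the camera records

virt_light = rng.random(n_cam)           # virtual light placed at the camera
dual = T.T @ virt_light                  # image "seen" from the projector
```

Reciprocity shows up as the symmetry of the bilinear form: the energy a camera-side sensor measures from a projector pattern equals the energy a projector-side sensor would measure from the same pattern emitted at the camera.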
Face transfer with multilinear models
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073209
Daniel Vlasic, M. Brand, H. Pfister, J. Popović
Face Transfer is a method for mapping video-recorded performances of one individual to facial animations of another. It extracts visemes (speech-related mouth articulations), expressions, and three-dimensional (3D) pose from monocular video or film footage. These parameters are then used to generate and drive a detailed 3D textured face mesh for a target identity, which can be seamlessly rendered back into target footage. The underlying face model automatically adjusts for how the target performs facial expressions and visemes. The performance data can be easily edited to change the visemes, expressions, pose, or even the identity of the target---the attributes are separably controllable. This supports a wide variety of video rewrite and puppetry applications. Face Transfer is based on a multilinear model of 3D face meshes that separably parameterizes the space of geometric variations due to different attributes (e.g., identity, expression, and viseme). Separability means that each of these attributes can be independently varied. A multilinear model can be estimated from a Cartesian product of examples (identities × expressions × visemes) with techniques from statistical analysis, but only after careful preprocessing of the geometric data set to secure one-to-one correspondence, to minimize cross-coupling artifacts, and to fill in any missing examples. Face Transfer offers new solutions to these problems and links the estimated model with a face-tracking algorithm to extract pose, expression, and viseme parameters.
Citations: 495
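The multilinear model described above can be sketched as a core tensor contracted with one coefficient vector per attribute; varying one vector while holding the others fixed is exactly the separability the abstract describes. A toy Tucker-style sketch with invented dimensions (a random core stands in for one estimated from real face scans):

```python
import numpy as np

# Hypothetical sketch of a multilinear face model: vertex positions are a
# multilinear function of identity, expression, and viseme coefficients.
rng = np.random.default_rng(1)
n_verts, n_id, n_expr, n_vis = 9, 3, 4, 2
core = rng.random((n_verts, n_id, n_expr, n_vis))  # toy core tensor

w_id = rng.random(n_id)      # identity coefficients
w_expr = rng.random(n_expr)  # expression coefficients
w_vis = rng.random(n_vis)    # viseme coefficients

# Contract the core tensor with each attribute's coefficient vector.
mesh = np.einsum('vijk,i,j,k->v', core, w_id, w_expr, w_vis)
```

Because the model is linear in each factor separately, swapping only `w_id` retargets the same expression and viseme to a different identity, which is the basis of the transfer application.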
Mesh-based inverse kinematics
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073218
R. Sumner, Matthias Zwicker, C. Gotsman, J. Popović
The ability to position a small subset of mesh vertices and produce a meaningful overall deformation of the entire mesh is a fundamental task in mesh editing and animation. However, the class of meaningful deformations varies from mesh to mesh and depends on mesh kinematics, which prescribes valid mesh configurations, and a selection mechanism for choosing among them. Drawing an analogy to the traditional use of skeleton-based inverse kinematics for posing skeletons, we define mesh-based inverse kinematics as the problem of finding meaningful mesh deformations that meet specified vertex constraints. Our solution relies on example meshes to indicate the class of meaningful deformations. Each example is represented with a feature vector of deformation gradients that capture the affine transformations which individual triangles undergo relative to a reference pose. To pose a mesh, our algorithm efficiently searches among all meshes with specified vertex positions to find the one that is closest to some pose in a nonlinear span of the example feature vectors. Since the search is not restricted to the span of example shapes, this produces compelling deformations even when the constraints require poses that are different from those observed in the examples. Furthermore, because the span is formed by a nonlinear blend of the example feature vectors, the blending component of our system may also be used independently to pose meshes by specifying blending weights or to compute multi-way morph sequences.
Citations: 313
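The per-triangle deformation gradient feature can be illustrated in 2D: it is the affine map taking the reference triangle's edge vectors to the posed triangle's edge vectors. A minimal sketch with toy coordinates (the paper works with 3D triangles, which add a normal direction to make the map invertible):

```python
import numpy as np

# Hypothetical sketch: the deformation gradient of a single 2D triangle,
# the per-triangle affine feature used in mesh-based inverse kinematics.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # reference pose
posed = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 3.0]])  # deformed pose

def deformation_gradient(a, b):
    # Columns are the two edge vectors emanating from vertex 0.
    Ea = np.column_stack((a[1] - a[0], a[2] - a[0]))
    Eb = np.column_stack((b[1] - b[0], b[2] - b[0]))
    return Eb @ np.linalg.inv(Ea)   # maps reference edges to posed edges

F = deformation_gradient(ref, posed)
# This toy triangle is scaled 2x in x and 3x in y, so F = diag(2, 3).
```

Stacking these per-triangle matrices over the whole mesh gives the feature vector whose nonlinear span the algorithm searches.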
Geometry-guided progressive lossless 3D mesh coding with octree (OT) decomposition
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073237
Jingliang Peng, C.-C. Jay Kuo
A new progressive lossless 3D triangular mesh encoder is proposed in this work, which can encode any 3D triangular mesh with an arbitrary topological structure. Given a mesh, the quantized 3D vertices are first partitioned into an octree (OT) structure, which is then traversed from the root down to the leaves. During the traversal, each 3D cell in the tree front is subdivided into eight child cells. For each cell subdivision, both local geometry and connectivity changes are encoded, where the connectivity coding is guided by the geometry coding. Furthermore, prioritized cell subdivision is performed in the tree front to provide better rate-distortion (RD) performance. Experiments show that the proposed mesh coder outperforms the kd-tree algorithm in both geometry and connectivity coding efficiency. For the geometry coding part, the range of improvement is typically around 10% to 20%, but may go up to 50% to 60% for meshes with highly regular geometry data and/or tight clustering of vertices.
Citations: 204
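One octree subdivision step can be sketched as routing each quantized vertex to one of eight child cells by comparing its coordinates against the cell center, one bit per axis. A toy sketch (function names are mine, not the paper's; the encoder would emit which of the eight children are non-empty):

```python
# Hypothetical sketch: one octree subdivision step over quantized vertices.
def child_index(v, center):
    # 3-bit child code: one bit per axis (x is the lowest bit).
    return sum(1 << axis for axis in range(3) if v[axis] >= center[axis])

def subdivide(vertices, center):
    children = {i: [] for i in range(8)}
    for v in vertices:
        children[child_index(v, center)].append(v)
    # Only the non-empty children need to be encoded and recursed into.
    return {i: vs for i, vs in children.items() if vs}

cells = subdivide([(1, 1, 1), (5, 1, 1), (5, 5, 5)], center=(4, 4, 4))
# Three vertices fall into three distinct children: codes 0, 1, and 7.
```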
Texture optimization for example-based synthesis
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073263
Vivek Kwatra, Irfan Essa, A. Bobick, Nipun Kwatra
We present a novel technique for texture synthesis using optimization. We define a Markov Random Field (MRF)-based similarity metric for measuring the quality of synthesized texture with respect to a given input sample. This allows us to formulate the synthesis problem as minimization of an energy function, which is optimized using an Expectation Maximization (EM)-like algorithm. In contrast to most example-based techniques that do region-growing, ours is a joint optimization approach that progressively refines the entire texture. Additionally, our approach is ideally suited to allow for controllable synthesis of textures. Specifically, we demonstrate controllability by animating image textures using flow fields. We allow for general two-dimensional flow fields that may dynamically change over time. Applications of this technique include dynamic texturing of fluid animations and texture-based flow visualization.
Citations: 726
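The EM-like alternation can be sketched in 1D: each output window is matched to its nearest input window (E-step), and the matched windows are averaged back into the output over their overlaps (M-step). A toy sketch with a made-up binary "texture"; the real method works on 2D image neighborhoods with a robust norm:

```python
import numpy as np

# Hypothetical 1-D caricature of texture optimization.
rng = np.random.default_rng(2)
src = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])   # toy input "texture"
out = rng.random(8)                               # random initialization
w = 3                                             # window (neighborhood) size

cands = [src[j:j + w] for j in range(len(src) - w + 1)]
for _ in range(10):
    acc = np.zeros_like(out)
    cnt = np.zeros_like(out)
    for i in range(len(out) - w + 1):
        win = out[i:i + w]
        # E-step: nearest input window under squared error.
        best = min(cands, key=lambda c: float(np.sum((win - c) ** 2)))
        # M-step: accumulate matched windows for averaging over overlaps.
        acc[i:i + w] += best
        cnt[i:i + w] += 1
    out = acc / cnt

# After a few iterations the output is a blend of matched source windows.
```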
Automatic determination of facial muscle activations from sparse motion capture marker data
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073208
Eftychios Sifakis, I. Neverov, Ronald Fedkiw
We built an anatomically accurate model of facial musculature, passive tissue and underlying skeletal structure using volumetric data acquired from a living male subject. The tissues are endowed with a highly nonlinear constitutive model including controllable anisotropic muscle activations based on fiber directions. Detailed models of this sort can be difficult to animate, requiring complex coordinated stimulation of the underlying musculature. We propose a solution to this problem by automatically determining muscle activations that track a sparse set of surface landmarks, e.g. acquired from motion capture marker data. Since the resulting animation is obtained via a three-dimensional nonlinear finite element method, we obtain visually plausible and anatomically correct deformations with spatial and temporal coherence that provides robustness against outliers in the motion capture data. Moreover, the obtained muscle activations can be used in a robust simulation framework including contact and collision of the face with external objects.
Citations: 404
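A heavily simplified, linearized version of the inverse problem: if marker displacements were a linear function of muscle activations, the activations could be recovered by least squares. The toy Jacobian `J` below is invented for illustration; the paper solves the full nonlinear finite-element version of this fit:

```python
import numpy as np

# Hypothetical linearized sketch: recover muscle activations from sparse
# marker displacements, assuming a (toy) linear activation->marker map J.
rng = np.random.default_rng(3)
n_markers, n_muscles = 12, 5
J = rng.random((n_markers, n_muscles))   # toy activation-to-marker Jacobian
a_true = rng.random(n_muscles)           # "unknown" activations
d = J @ a_true                           # observed marker displacements

# Least-squares estimate of the activations from the markers.
a_est, *_ = np.linalg.lstsq(J, d, rcond=None)
```

With more markers than muscles the system is overdetermined, which is what gives the real method its robustness to marker outliers.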
The Varrier™ autostereoscopic virtual reality display
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073279
D. Sandin, Todd Margolis, Jinghua Ge, Javier Girado, T. Peterka, T. DeFanti
Virtual reality (VR) has long been hampered by the gear needed to make the experience possible; specifically, stereo glasses and tracking devices. Autostereoscopic display devices are gaining popularity by freeing the user from stereo glasses, however few qualify as VR displays. The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) has designed and produced a large scale, high resolution head-tracked barrier-strip autostereoscopic display system that produces a VR immersive experience without requiring the user to wear any encumbrances. The resulting system, called Varrier, is a passive parallax barrier 35-panel tiled display that produces a wide field of view, head-tracked VR experience. This paper presents background material related to parallax barrier autostereoscopy, provides system configuration and construction details, examines Varrier interleaving algorithms used to produce the stereo images, introduces calibration and testing, and discusses the camera-based tracking subsystem.
Citations: 96
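The basic barrier-strip idea can be caricatured as column interleaving of the two eye views, so the physical barrier shows each eye only its own strips. A toy sketch; the real Varrier interleaving additionally accounts for the tracked head position and sub-pixel barrier geometry:

```python
import numpy as np

# Hypothetical sketch of barrier-strip interleaving: alternate columns of
# the left- and right-eye renderings. Image sizes and values are toys.
left = np.full((4, 8), 1)    # toy left-eye image (all 1s)
right = np.full((4, 8), 2)   # toy right-eye image (all 2s)

# Even columns come from the left image, odd columns from the right.
interleaved = np.where(np.arange(left.shape[1]) % 2 == 0, left, right)
```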
Visual simulation of weathering by γ-ton tracing
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073321
Yanyun Chen, Lin Xia, T. Wong, Xin Tong, H. Bao, B. Guo, H. Shum
Weathering modeling introduces blemishes such as dirt, rust, cracks and scratches to virtual scenery. In this paper we present a visual simulation technique that works well for a wide variety of weathering phenomena. Our technique, called γ-ton tracing, is based on a type of aging-inducing particles called γ-tons. Modeling a weathering effect with γ-ton tracing involves tracing a large number of γ-tons through the scene in a way similar to photon tracing and then generating the weathering effect using the recorded γ-ton transport information. With this technique, we can produce weathering effects that are customized to the scene geometry and tailored to the weathering sources. Several effects that are challenging for existing techniques can be readily captured by γ-ton tracing. These include global transport effects, or "stain bleeding". γ-ton tracing also enables visual simulations of complex multi-weathering effects. Lastly, γ-ton tracing can generate weathering effects that not only involve texture changes but also large-scale geometry changes. We demonstrate our technique with a variety of examples.
Citations: 76
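The γ-ton transport step can be caricatured as a random walk that deposits part of its carried "blemish" material at each surface hit, by analogy with photon tracing. The surfaces, deposit fraction, and bounce count below are invented for illustration:

```python
import random

# Hypothetical caricature of gamma-ton tracing: aging particles bounce
# through a (toy) scene and deposit carried material where they hit.
random.seed(4)
surfaces = {"roof": 0.0, "wall": 0.0, "ground": 0.0}

def trace_gamma_ton(carried=1.0, bounces=3):
    for _ in range(bounces):
        hit = random.choice(list(surfaces))  # toy stand-in for ray casting
        deposit = 0.5 * carried              # deposit half, keep half moving
        surfaces[hit] += deposit
        carried -= deposit

for _ in range(1000):
    trace_gamma_ton()

# The per-surface totals form the recorded transport information from
# which a weathering texture would be generated.
```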
An approximate image-space approach for interactive refraction
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073310
Chris Wyman
Many interactive applications strive for realistic renderings, but framerate constraints usually limit realism to effects that run efficiently in graphics hardware. One effect largely ignored in such applications is refraction. We introduce a simple, image-space approach to refractions that easily runs on modern graphics cards. Our method requires two passes on a GPU, and allows refraction of a distant environment through two interfaces, compared to current interactive techniques that are restricted to a single interface. Like all image-based algorithms, aliasing can occur in certain circumstances, but the plausible refractions generated with our approach should suffice for many applications.
Citations: 123
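The per-pixel core of any such renderer is Snell's law in vector form, the same formula as GLSL's built-in `refract()`; the paper's contribution is applying it twice per ray using image-space depth and normal buffers. A minimal sketch of the vector formula itself:

```python
import numpy as np

# Snell's law in vector form (the formula behind GLSL's refract built-in).
def refract(I, N, eta):
    # I: normalized incident direction, N: surface normal, eta: n1/n2.
    cos_i = -np.dot(I, N)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    return eta * I + (eta * cos_i - np.sqrt(k)) * N

I = np.array([0.0, -1.0, 0.0])           # ray travelling straight down
N = np.array([0.0, 1.0, 0.0])            # upward-facing surface normal
T = refract(I, N, 1.0 / 1.5)             # air into glass (n = 1.5)
# At normal incidence the ray passes through unbent: T equals I.
```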
Interactive collision detection between deformable models using chromatic decomposition
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073301
N. Govindaraju, D. Knott, Nitin Jain, I. Kabul, Rasmus Tamstorf, Russell Gayle, M. Lin, Dinesh Manocha
We present a novel algorithm for accurately detecting all contacts, including self-collisions, between deformable models. We precompute a chromatic decomposition of a mesh into non-adjacent primitives using graph coloring algorithms. The chromatic decomposition enables us to check for collisions between non-adjacent primitives using a linear-time culling algorithm. As a result, we achieve higher culling efficiency and significantly reduce the number of false positives. We use our algorithm to check for collisions among complex deformable models consisting of tens of thousands of triangles for cloth modeling and medical simulation. Our algorithm accurately computes all contacts at interactive rates. We observed up to an order of magnitude speedup over prior methods.
Citations: 182
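The chromatic decomposition can be sketched as graph coloring over triangle adjacency, so that triangles of the same color are guaranteed non-adjacent. A toy greedy sketch, where sharing any vertex counts as adjacent (the paper precomputes this with more sophisticated coloring algorithms):

```python
# Hypothetical sketch: greedy graph coloring of mesh triangles so that
# adjacent triangles (sharing a vertex) never receive the same color.
triangles = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (4, 5, 6)]

def greedy_color(tris):
    colors = {}
    for i, t in enumerate(tris):
        used = {colors[j] for j, u in enumerate(tris[:i])
                if set(t) & set(u)}      # colors taken by earlier neighbors
        c = 0
        while c in used:                 # smallest unused color
            c += 1
        colors[i] = c
    return colors

colors = greedy_color(triangles)
# Same-color triangles are provably non-adjacent, so per-color groups can
# be culled against each other without checking shared-vertex pairs.
```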