ACM SIGGRAPH 2005 Papers: Latest Publications

Motion magnification
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073223
Ce Liu, A. Torralba, W. Freeman, F. Durand, E. Adelson
Abstract: We present motion magnification, a technique that acts like a microscope for visual motion. It can amplify subtle motions in a video sequence, allowing for visualization of deformations that would otherwise be invisible. To achieve motion magnification, we need to accurately measure visual motions, and group the pixels to be modified. After an initial image registration step, we measure motion by a robust analysis of feature point trajectories, and segment pixels based on similarity of position, color, and motion. A novel measure of motion similarity groups even very small motions according to correlation over time, which often relates to physical cause. An outlier mask marks observations not explained by our layered motion model, and those pixels are simply reproduced on the output from the original registered observations. The motion of any selected layer may be magnified by a user-specified amount; texture synthesis fills in unseen "holes" revealed by the amplified motions. The resulting motion-magnified images can reveal or emphasize small motions in the original sequence, as we demonstrate with deformations in load-bearing structures, subtle motions or balancing corrections of people, and "rigid" structures bending under hand pressure.
Citations: 310
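At its core, the magnification step scales each tracked point's deviation from its rest position by the user-specified factor. A minimal sketch of just that step follows, assuming the video is already registered and a layer's trajectories extracted; the array shape and factor name are illustrative, and the paper's segmentation, outlier masking, and texture-synthesis stages are omitted.

```python
import numpy as np

def magnify_trajectories(trajectories, factor):
    """trajectories: (num_frames, num_points, 2) array of (x, y) positions."""
    mean_pos = trajectories.mean(axis=0, keepdims=True)   # per-point rest position
    return mean_pos + factor * (trajectories - mean_pos)  # amplify deviations

# Example: a 0.5 px oscillation becomes a clearly visible 5 px one.
t = np.linspace(0.0, 2.0 * np.pi, 30)
traj = np.stack([100.0 + 0.5 * np.sin(t), np.full_like(t, 50.0)], axis=-1)[:, None, :]
magnified = magnify_trajectories(traj, factor=10.0)
print(np.ptp(magnified[:, 0, 0]))   # ~10 px peak-to-peak, up from ~1 px
```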
A practical analytic single scattering model for real time rendering
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073309
Bo Sun, R. Ramamoorthi, S. Narasimhan, S. Nayar
Abstract: We consider real-time rendering of scenes in participating media, capturing the effects of light scattering in fog, mist and haze. While a number of sophisticated approaches based on Monte Carlo and finite element simulation have been developed, those methods do not work at interactive rates. The most common real-time methods are essentially simple variants of the OpenGL fog model. While easy to use and specify, that model excludes many important qualitative effects like glows around light sources, the impact of volumetric scattering on the appearance of surfaces such as the diffusing of glossy highlights, and the appearance under complex lighting such as environment maps. In this paper, we present an alternative physically based approach that captures these effects while maintaining real-time performance and the ease of use of the OpenGL fog model. Our method is based on an explicit analytic integration of the single scattering light transport equations for an isotropic point light source in a homogeneous participating medium. We can implement the model in modern programmable graphics hardware using a few small numerical lookup tables stored as texture maps. Our model can also be easily adapted to generate the appearances of materials with arbitrary BRDFs, environment map lighting, and precomputed radiance transfer methods, in the presence of participating media. Hence, our techniques can be widely used in real-time rendering.
Citations: 110
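The implementation hinges on reducing the scattering integral to small numerical lookup tables sampled at render time like textures. The sketch below reproduces that table-driven pattern on the CPU with a simplified integrand and parameterization of our own choosing, not the paper's closed-form airlight function; the extinction coefficient, toy light geometry, and table resolution are all assumptions.

```python
import numpy as np

BETA = 0.05       # extinction coefficient of the medium (assumed)
N_SAMPLES = 256   # integration resolution used to build the table

def airlight_integral(ray_len, light_dist):
    """Single-scattered radiance along a view ray of length ray_len, with an
    isotropic point light at distance light_dist from the ray origin (toy
    geometry: the light sits perpendicular to the ray)."""
    s = np.linspace(1e-3, ray_len, N_SAMPLES)   # sample positions on the ray
    d_light = np.sqrt(s**2 + light_dist**2)     # sample-to-light distance
    integrand = BETA * np.exp(-BETA * (s + d_light)) / d_light**2
    return float(np.trapz(integrand, s))

# Precompute the 2D table once, then answer per-pixel queries by lookup,
# as a fragment shader would sample a texture map.
ray_lens = np.linspace(0.1, 50.0, 64)
light_dists = np.linspace(0.1, 50.0, 64)
TABLE = np.array([[airlight_integral(r, d) for d in light_dists] for r in ray_lens])

def lookup(ray_len, light_dist):
    """Coarse (upper-neighbour) table fetch; a GPU would filter bilinearly."""
    i = min(np.searchsorted(ray_lens, ray_len), len(ray_lens) - 1)
    j = min(np.searchsorted(light_dists, light_dist), len(light_dists) - 1)
    return float(TABLE[i, j])

print(lookup(20.0, 5.0))   # glow contribution along one pixel's ray
```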
Discontinuous fluids
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073283
Jeong-Mo Hong, Chang-Hun Kim
Abstract: At interfaces between different fluids, properties such as density, viscosity, and molecular cohesion are discontinuous. To animate small-scale details of incompressible viscous multi-phase fluids realistically, we focus on the discontinuities in the state variables that express these properties. Surface tension of both free and bubble surfaces is modeled using the jump condition in the pressure field, and discontinuities in the velocity gradient field, driven by viscosity differences, are also considered. To obtain derivatives of the pressure and velocity fields with sub-grid accuracy, they are extrapolated across interfaces using continuous variables based on physical properties. The numerical methods that we present are easy to implement and do not impact the performance of existing solvers. Small-scale fluid motions, such as capillary instability, breakup of liquid sheets, and bubbly water can all be successfully animated.
Citations: 194
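The pressure jump condition is the heart of the surface tension model: across the interface the pressure is discontinuous by [p] = sigma * kappa, so finite differences whose stencils straddle the interface must use a ghost value with the jump restored. A minimal 1D sketch of that correction follows; the grid, sigma, and kappa values are illustrative, and the full solver, extrapolation, and velocity-gradient jumps are omitted.

```python
import numpy as np

DX = 0.1
SIGMA, KAPPA = 0.073, 2.0    # surface tension coefficient, interface curvature
JUMP = SIGMA * KAPPA         # prescribed pressure jump [p] across the interface

# Pressure samples with the interface between cells 1 and 2; cells 2 and 3
# carry the jump on top of a smooth underlying field.
p = np.array([1.00, 1.02, 1.04 + JUMP, 1.06 + JUMP])
INTERFACE_AFTER = 1          # interface lies between cell 1 and cell 2

def dpdx(i):
    """One-sided pressure gradient at cell i, jump-corrected when the
    stencil crosses the interface."""
    p_right = p[i + 1]
    if i == INTERFACE_AFTER:
        p_right -= JUMP      # ghost value: remove the jump before differencing
    return (p_right - p[i]) / DX

print(dpdx(0), dpdx(1), dpdx(2))   # all ~0.2: the jump no longer pollutes gradients
```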
Evaluation of tone mapping operators using a High Dynamic Range display
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073242
P. Ledda, A. Chalmers, T. Troscianko, H. Seetzen
Abstract: Tone mapping operators are designed to reproduce visibility and the overall impression of brightness, contrast and color of the real world onto limited dynamic range displays and printers. Although many tone mapping operators have been published in recent years, no thorough psychophysical experiments have yet been undertaken to compare such operators against the real scenes they are purporting to depict. In this paper, we present the results of a series of psychophysical experiments to validate six frequently used tone mapping operators against linearly mapped High Dynamic Range (HDR) scenes displayed on a novel HDR device. Individual operators address the tone mapping issue using a variety of approaches and the goals of these techniques are often quite different from one another. Therefore, the purpose of this investigation was not simply to determine which is the "best" algorithm, but more generally to propose an experimental methodology to validate such operators and to determine the participants' impressions of the images produced compared to what is visible on a high contrast ratio display.
Citations: 343
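For readers outside the tone mapping literature, a minimal global operator in the spirit of Reinhard et al.'s photographic operator illustrates the class of algorithms being compared: expose for a mid-grey key, then compress HDR luminance with L/(1+L). This is only an illustration, not one of the six operators as configured in the study, and the key value is an assumption.

```python
import numpy as np

def tone_map(luminance, key=0.18):
    """Map HDR luminance to display range via global L/(1+L) compression."""
    l_avg = np.exp(np.mean(np.log(luminance + 1e-6)))   # log-average luminance
    l_scaled = key / l_avg * luminance                  # expose for mid-grey 'key'
    return l_scaled / (1.0 + l_scaled)                  # compress into [0, 1)

hdr = np.array([0.01, 1.0, 100.0, 10000.0])   # a 10^6 : 1 contrast ratio
print(tone_map(hdr))                          # monotone display values in [0, 1)
```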
Line drawings from volume data
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073222
M. Burns, Janek Klawe, S. Rusinkiewicz, Adam Finkelstein, D. DeCarlo
Abstract: Renderings of volumetric data have become an important data analysis tool for applications ranging from medicine to scientific simulation. We propose a volumetric drawing system that directly extracts sparse linear features, such as silhouettes and suggestive contours, using a temporally coherent seed-and-traverse framework. In contrast to previous methods based on isosurfaces or nonrefractive transparency, producing these drawings requires examining an asymptotically smaller subset of the data, leading to efficiency on large data sets. In addition, the resulting imagery is often more comprehensible than standard rendering styles, since it focuses attention on important features in the data. We test our algorithms on datasets up to 512³, demonstrating interactive extraction and rendering of line drawings in a variety of drawing styles.
Citations: 111
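The linear features being extracted satisfy simple differential conditions; a silhouette, for instance, lives where the isosurface normal is perpendicular to the view direction. The brute-force sketch below flags grid cells near an isosurface where grad(f) . v changes sign along the view axis, which is what the paper's seed-and-traverse scheme finds without visiting most of the volume; the scalar field, iso value, and view direction are assumptions.

```python
import numpy as np

n, iso = 32, 1.0
ax = np.linspace(-2.0, 2.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
f = x**2 + y**2 + z**2                     # toy scalar field: nested spheres
gx, gy, gz = np.gradient(f, ax, ax, ax)    # central-difference gradient
view = np.array([0.0, 0.0, 1.0])           # orthographic view direction
g_dot_v = gx * view[0] + gy * view[1] + gz * view[2]

near_surface = np.abs(f - iso) < 0.15                        # cells near isosurface
sign_change = g_dot_v[:, :, :-1] * g_dot_v[:, :, 1:] < 0.0   # grad.v crosses zero
silhouette = near_surface[:, :, :-1] & sign_change
print(silhouette.sum(), "candidate silhouette cells")        # the sphere's equator
```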
Meshless deformations based on shape matching
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073216
Matthias Müller, Bruno Heidelberger, M. Teschner, M. Gross
Abstract: We present a new approach for simulating deformable objects. The underlying model is geometrically motivated. It handles point-based objects and does not need connectivity information. The approach does not require any pre-processing, is simple to compute, and provides unconditionally stable dynamic simulations. The main idea of our deformable model is to replace energies by geometric constraints and forces by distances of current positions to goal positions. These goal positions are determined via a generalized shape matching of an undeformed rest state with the current deformed state of the point cloud. Since points are always drawn towards well-defined locations, the overshooting problem of explicit integration schemes is eliminated. The versatility of the approach in terms of object representations that can be handled, the efficiency in terms of memory and computational complexity, and the unconditional stability of the dynamic simulation make the approach particularly interesting for games.
Citations: 628
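The goal positions come from a closed-form best-fit rigid transform of the rest shape onto the current point cloud: build A_pq = sum_i outer(p_i, q_i) from relative positions, take the rotation of its polar decomposition (here via SVD), and pull each point toward the rotated rest position. A minimal sketch follows, with uniform masses and the paper's linear and quadratic extensions omitted; the stiffness name alpha follows the paper.

```python
import numpy as np

def shape_match_goals(rest, current):
    """rest, current: (n, 3) positions. Returns the (n, 3) goal positions."""
    c0, c = rest.mean(axis=0), current.mean(axis=0)   # centres of mass
    q, p = rest - c0, current - c                     # relative positions
    a_pq = p.T @ q                                    # sum_i outer(p_i, q_i)
    u, _, vt = np.linalg.svd(a_pq)
    if np.linalg.det(u @ vt) < 0.0:                   # guard against reflections
        u[:, -1] *= -1.0
    r = u @ vt                                        # rotation of the polar decomp.
    return q @ r.T + c                                # goals g_i = R q_i + c

def step(rest, x, v, dt=0.01, alpha=0.8):
    """One explicit step pulling points toward their shape-matched goals."""
    g = shape_match_goals(rest, x)
    v = v + alpha * (g - x) / dt                      # stable goal-seeking update
    return x + v * dt, v

rest = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
x = rest + np.random.default_rng(0).normal(0.0, 0.1, rest.shape)  # deformed copy
x, v = step(rest, x, np.zeros_like(rest))
print(x)   # points pulled back toward a rigidly transformed rest shape
```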
Mean value coordinates for closed triangular meshes
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073229
T. Ju, S. Schaefer, J. Warren
Abstract: Constructing a function that interpolates a set of values defined at vertices of a mesh is a fundamental operation in computer graphics. Such an interpolant has many uses in applications such as shading, parameterization and deformation. For closed polygons, mean value coordinates have been proven to be an excellent method for constructing such an interpolant. In this paper, we generalize mean value coordinates from closed 2D polygons to closed triangular meshes. Given such a mesh P, we show that these coordinates are continuous everywhere and smooth on the interior of P. The coordinates are linear on the triangles of P and can reproduce linear functions on the interior of P. To illustrate their usefulness, we conclude by considering several interesting applications including constructing volumetric textures and surface deformation.
Citations: 636
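The 2D ancestor of the construction makes the generalization concrete: for a closed polygon, the mean value weight of vertex v_i at point x is w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - x|, where a_i is the angle subtended at x by edge (v_i, v_{i+1}). The sketch below checks that these weights reproduce linear functions; the paper's 3D mesh version projects triangles onto a sphere around x and is more involved. The test polygon and query point are assumptions.

```python
import numpy as np

def mean_value_coords(poly, x):
    """poly: (n, 2) polygon vertices in counter-clockwise order; x: interior point."""
    d = poly - x
    r = np.linalg.norm(d, axis=1)              # distances |v_i - x|
    theta = np.arctan2(d[:, 1], d[:, 0])       # angle of each vertex around x
    a = np.diff(np.append(theta, theta[0]))    # signed angle per edge
    a = (a + np.pi) % (2.0 * np.pi) - np.pi    # wrap into (-pi, pi]
    t = np.tan(a / 2.0)
    w = (np.roll(t, 1) + t) / r                # tan(a_{i-1}/2) + tan(a_i/2)
    return w / w.sum()                         # normalize: coordinates sum to 1

square = np.array([[0.0, 0], [2, 0], [2, 2], [0, 2]])
x = np.array([0.5, 1.2])
lam = mean_value_coords(square, x)
print(lam.sum(), lam @ square)   # 1.0 and x itself: linear reproduction holds
```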
Geopostors: a real-time geometry/impostor crowd rendering system
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073290
S. Dobbyn, J. Hamill, K. O'Conor, C. O'Sullivan
Abstract: The simulation of large crowds of humans is important in many fields of computer graphics, including real-time applications such as games, as they can breathe life into otherwise static scenes and enhance believability. Although many new games are released each year, it is very unusual to find large-scale crowds populating the environments depicted. Such applications need to deal with having limited resources available at each frame. With many hundreds or thousands of potential virtual humans in a crowd, traditional techniques rapidly become overwhelmed and are not able to sustain an interactive frame-rate. Therefore, simpler approaches to the rendering, animation and behaviour control of the crowds are needed. Additionally, these new approaches must provide for variety, as environments inhabited by carbon-copy clones can be disconcerting and unrealistic.
Citations: 31
Defocus video matting
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073231
M. McGuire, W. Matusik, H. Pfister, J. Hughes, F. Durand
Abstract: Video matting is the process of pulling a high-quality alpha matte and foreground from a video sequence. Current techniques require either a known background (e.g., a blue screen) or extensive user interaction (e.g., to specify known foreground and background elements). The matting problem is generally under-constrained, since not enough information has been collected at capture time. We propose a novel, fully autonomous method for pulling a matte using multiple synchronized video streams that share a point of view but differ in their plane of focus. The solution is obtained by directly minimizing the error in filter-based image formation equations, which are over-constrained by our rich data stream. Our system solves the fully dynamic video matting problem without user assistance: both the foreground and background may be high frequency and have dynamic content, the foreground may resemble the background, and the scene is lit by natural (as opposed to polarized or collimated) illumination.
Citations: 171
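The objective being minimized can be written down directly: each synchronized stream observes a differently defocused composite of foreground and background, so trial values of (alpha, F, B) can be rendered through the defocus model and compared against the captured frames. The sketch below evaluates such a residual with Gaussian kernels standing in for the true point-spread functions; the three-camera naming, per-layer blur model, and sigma values are assumptions, and the paper's contribution is optimizing this error rather than merely evaluating it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Defocus blur per layer for each camera: the pinhole sees everything sharp,
# the foreground-focused camera blurs the background, and vice versa.
FG_SIGMA = {"pinhole": 0.0, "fg_focus": 0.0, "bg_focus": 2.0}
BG_SIGMA = {"pinhole": 0.0, "fg_focus": 2.0, "bg_focus": 0.0}

def synthesize(alpha, fg, bg, cam):
    """Composite seen by camera `cam` under a simplified per-layer blur model."""
    return (gaussian_filter(alpha * fg, FG_SIGMA[cam]) +
            gaussian_filter((1.0 - alpha) * bg, BG_SIGMA[cam]))

def matting_error(alpha, fg, bg, captured):
    """Sum of squared residuals of the image formation model over all streams."""
    return sum(np.sum((synthesize(alpha, fg, bg, cam) - img) ** 2)
               for cam, img in captured.items())

rng = np.random.default_rng(1)
fg, bg = rng.random((32, 32)), rng.random((32, 32))
alpha = np.zeros((32, 32)); alpha[8:24, 8:24] = 1.0
captured = {cam: synthesize(alpha, fg, bg, cam) for cam in FG_SIGMA}
print(matting_error(alpha, fg, bg, captured))                      # 0 at the truth
print(matting_error(np.roll(alpha, 3, axis=0), fg, bg, captured))  # > 0 when wrong
```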
A data-driven approach to quantifying natural human motion
ACM SIGGRAPH 2005 Papers. Pub Date: 2005-07-01. DOI: 10.1145/1186822.1073316
Liu Ren, A. Patrick, Alexei A. Efros, J. Hodgins, James M. Rehg
Abstract: In this paper, we investigate whether it is possible to develop a measure that quantifies the naturalness of human motion (as defined by a large database). Such a measure might prove useful in verifying that a motion editing operation had not destroyed the naturalness of a motion capture clip or that a synthetic motion transition was within the space of those seen in natural human motion. We explore the performance of mixture of Gaussians (MoG), hidden Markov models (HMM), and switching linear dynamic systems (SLDS) on this problem. We use each of these statistical models alone and as part of an ensemble of smaller statistical models. We also implement a Naive Bayes (NB) model for a baseline comparison. We test these techniques on motion capture data held out from a database, keyframed motions, edited motions, motions with noise added, and synthetic motion transitions. We present the results as receiver operating characteristic (ROC) curves and compare them to the judgments made by subjects in a user study.
Citations: 186
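The simplest of the compared detectors is easy to reproduce in outline: fit a density model to features of natural motion, score new clips by log-likelihood, and sweep a threshold to trace the ROC curve. A toy version with a mixture of Gaussians follows; real inputs would be per-frame pose features from the motion capture database, so the synthetic 2D features and component count here are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
natural = rng.normal(0.0, 1.0, size=(500, 2))     # stand-in "natural" features
unnatural = rng.normal(3.0, 1.5, size=(100, 2))   # e.g. noise-corrupted clips

model = GaussianMixture(n_components=4, random_state=0).fit(natural)

scores_nat = model.score_samples(rng.normal(0.0, 1.0, size=(200, 2)))
scores_unnat = model.score_samples(unnatural)

# Each threshold trades false rejections for false acceptances; sweeping it
# over all values yields one point of the ROC curve at a time.
threshold = np.percentile(scores_nat, 5)          # accept ~95% of held-out natural
print("natural accepted:", np.mean(scores_nat >= threshold))
print("unnatural rejected:", np.mean(scores_unnat < threshold))
```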