15th Pacific Conference on Computer Graphics and Applications (PG'07): Latest Publications

Laplacian Guided Editing, Synthesis, and Simulation
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.69
Yizhou Yu
Summary form only given. The Laplacian has been playing a central role in numerous scientific and engineering problems. It has also become popular in computer graphics. This talk presents a series of our work that exploits the Laplacian in mesh editing, texture synthesis and flow simulation. First, a review is given of mesh editing using differential coordinates and the Poisson equation, which involves the Laplacian. The distinctive feature of this approach is that it modifies the original geometry implicitly through gradient field manipulation. This approach can produce desired results for both global and local editing operations, such as deformation, object merging, and denoising. This technique is computationally involved since it requires solving a large sparse linear system. To overcome this difficulty, an efficient multigrid algorithm specifically tailored for geometry processing has been developed. This multigrid algorithm is capable of interactively processing meshes with hundreds of thousands of vertices. In our latest work, Laplacian-based editing has been generalized to deforming mesh sequences, and efficient user interaction techniques have also been designed. Second, this talk presents a Laplacian-based method for surface texture synthesis and mixing from multiple sources. Eliminating seams among texture patches is important during texture synthesis. In our technique, it is solved by performing Laplacian texture reconstruction, which retains the high frequency details but computes new consistent low frequency components. Third, a method for inviscid flow simulation over manifold surfaces is presented. This method enforces incompressibility on closed surfaces by solving a discrete Poisson equation. Different from previous work, it performs simulations directly on triangle meshes and thus eliminates parametrization distortions.
Citations: 38
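The differential-coordinate editing idea above can be sketched in miniature: preserve each vertex's Laplacian (differential) coordinates while imposing new handle positions, and solve the resulting least-squares system. A minimal 2D closed-polyline version, not the talk's multigrid mesh pipeline; the uniform Laplacian and the soft-constraint weight are illustrative assumptions:

```python
import numpy as np

def laplacian_edit(points, handles, w=10.0):
    """Differential-coordinate editing of a closed 2D polyline.

    points  : (n, 2) original vertex positions
    handles : dict {vertex index: target position}
    Minimizes ||L v - L p||^2 + w^2 * sum ||v_i - h_i||^2 (soft constraints).
    """
    n = len(points)
    # Uniform graph Laplacian of a closed polyline (two neighbors per vertex).
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = 2.0
        L[i, (i - 1) % n] = -1.0
        L[i, (i + 1) % n] = -1.0
    delta = L @ points               # differential (Laplacian) coordinates
    # Stack weighted positional constraints under the Laplacian rows.
    rows, rhs = [L], [delta]
    for i, target in handles.items():
        e = np.zeros((1, n))
        e[0, i] = w
        rows.append(e)
        rhs.append(w * np.asarray(target, dtype=float)[None, :])
    A = np.vstack(rows)
    b = np.vstack(rhs)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

On real meshes this system is large and sparse, which is exactly why the multigrid solver described in the talk matters; `np.linalg.lstsq` here stands in only for tiny examples.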
A Data-driven Approach to Human-body Cloning Using a Segmented Body Database
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.45
P. Xi, Won-Sook Lee, Chang Shu
We present a data-driven approach to build a human body model from a single photograph by performing Principal Component Analysis (PCA) on a database of body segments. We segment a collection of human bodies to compile the required database prior to performing the analysis. Our approach then builds a single PCA for each body segment - head, left and right arms, torso, and left and right legs - yielding six PCAs in total. This strategy improves on the flexibility of conventional data-driven approaches to 3D modeling and allows our approach to take variations in ethnicity, age and body posture into account. We demonstrate our approach in practice by constructing models of a Caucasian male, an Asian male and a toddler from corresponding photographs and a Caucasian adult oriented database. We also discuss rapid consistent parameterization based on Radial Basis Functions (RBF) and non-optimization-based learning systems to reduce execution time.
Citations: 23
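The per-segment PCA strategy can be sketched as follows: one PCA model is fit independently per body segment, and a new body is reassembled from per-segment coefficients. A hedged numpy sketch on synthetic data; the segment names, component count, and random data are illustrative stand-ins, not the paper's scanned database:

```python
import numpy as np

def fit_segment_pca(data, n_components):
    """PCA for one body segment.
    data: (n_subjects, n_features) flattened vertex coordinates of the segment.
    Returns (mean, basis), where basis rows are principal directions."""
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data; rows of Vt are orthonormal principal axes.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]

def reconstruct(mean, basis, coeffs):
    """Rebuild one segment's geometry from its PCA coefficients."""
    return mean + coeffs @ basis

# One PCA per segment, mirroring the six-segment strategy described above;
# only three segments and small sizes here to keep the sketch tiny.
rng = np.random.default_rng(0)
segments = {name: rng.normal(size=(20, 30))   # 20 subjects, 30 coords each
            for name in ["head", "torso", "left_arm"]}
models = {name: fit_segment_pca(d, 5) for name, d in segments.items()}
```

Fitting per segment rather than on whole bodies is what lets segments vary independently, e.g. an adult torso combined with differently proportioned limbs.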
Explicit Control of Vector Field Based Shape Deformations
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.26
W. V. Funck, H. Theisel, H. Seidel
Vector field based shape deformations (VFSD) have been introduced as an efficient method to deform shapes in a volume-preserving, foldover-free manner. However, mainly simple implicitly defined shapes such as spheres or cylinders have been explored as deformation tools so far. In contrast, boundary constraint modeling approaches enable the user to define exactly the support of the deformation on the surface. We present an approach to explicitly control VFSD: a scalar function together with two thresholds is placed directly on the shape to mark regions of full, zero, and blended deformation. The resulting deformation is volume-preserving and free of local self-intersections. In addition, the full deformation is steered by a 3D parametric curve and a parametric twisting function. In this way, our deformations can be seen as a generalization of the boundary constraint modeling metaphor. We apply our approach in different scenarios. A parallelization of the computation on the GPU allows for editing high-resolution meshes at interactive speed.
Citations: 35
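The volume preservation in VFSD comes from integrating vertices along a divergence-free vector field. A minimal 2D illustration: building the field as the perpendicular gradient of a scalar potential makes it divergence-free by construction. The bump potential and the explicit Euler integrator are simplifications of this sketch, not the paper's construction:

```python
import numpy as np

def velocity(p, eps=1e-5):
    """Divergence-free 2D field v = (dphi/dy, -dphi/dx): the perpendicular
    gradient of a scalar potential phi, so div v = phi_yx - phi_xy = 0."""
    def phi(q):
        x, y = q
        return np.exp(-(x**2 + y**2))      # illustrative bump potential
    x, y = p
    dphix = (phi((x + eps, y)) - phi((x - eps, y))) / (2 * eps)
    dphiy = (phi((x, y + eps)) - phi((x, y - eps))) / (2 * eps)
    return np.array([dphiy, -dphix])

def advect(points, steps=100, dt=0.05):
    """Move points along the field (explicit Euler for brevity; a deformation
    tool would integrate more carefully)."""
    pts = np.array(points, dtype=float)
    for _ in range(steps):
        pts += dt * np.array([velocity(p) for p in pts])
    return pts
```

Because the flow of a divergence-free field is volume-preserving and injective, surfaces advected this way cannot locally self-intersect, which is the property the paper's explicit control machinery is built on.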
Practical Global Illumination for Hair Rendering
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.53
Cem Yuksel, E. Akleman, J. Keyser
Both hair rendering and global illumination are known to be computationally expensive, and for this reason we see very few examples using global illumination techniques in hair rendering. In this paper, we elaborate on different simplification approaches to allow practical global illumination solutions for high quality hair rendering. We categorize light paths of a full global illumination solution, and analyze their costs and illumination contributions both theoretically and experimentally. We also propose two different implementation techniques using our novel projection based indirect illumination computation approach and state of the art ray tracing for hair. Our results show that by using our simplifications, a global illumination solution for hair is practical.
Citations: 22
Rubber-like Exaggeration for Character Animation
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.25
Ji-yong Kwon, In-Kwon Lee
Motion capture cannot generate cartoon-style animation directly. We emulate the rubber-like exaggerations common in traditional character animation as a means of converting motion capture data into cartoon-like movement. We achieve this using trajectory-based motion exaggeration while allowing the violation of link-length constraints. We extend this technique to obtain smooth, rubber-like motion by dividing the original links into shorter sub-links and computing the positions of joints using Bézier curve interpolation and a mass-spring simulation. This method is fast enough to be used in real time.
Citations: 13
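The sub-link idea can be sketched directly: subdivide a link and place the new joint positions on a cubic Bézier curve whose inner control points are displaced sideways, producing the smooth rubber-hose bend. A hedged sketch; the control-point placement and the `bend` parameter are illustrative assumptions, and the paper additionally runs a mass-spring simulation on top of this:

```python
import numpy as np

def de_casteljau(control, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = np.array(control, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def rubber_link(p0, p1, bend, n_sub=8):
    """Place n_sub+1 joint positions of a subdivided link on a cubic Bezier.
    `bend` displaces the two inner control points perpendicular to the link,
    mimicking a rubber-hose limb (the displacement scheme is illustrative)."""
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    d = p1 - p0
    perp = np.array([-d[1], d[0]])
    perp = perp / np.linalg.norm(perp)
    c1 = p0 + d / 3 + bend * perp
    c2 = p0 + 2 * d / 3 + bend * perp
    ts = np.linspace(0.0, 1.0, n_sub + 1)
    return np.array([de_casteljau([p0, c1, c2, p1], t) for t in ts])
```

Note that the sub-joints deliberately violate the original link length, which is exactly the exaggeration the abstract describes: the chain of sub-links stretches along the curve rather than staying rigid.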
Cross-Parameterization for Triangular Meshes with Semantic Features
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.21
Shun Matsui, Kohta Aoki, H. Nagahashi, K. Morooka
In 3D computer graphics, mesh parameterization is a key technique for digital geometry processing (DGP) tasks such as morphing, shape blending, texture transfer, re-meshing and so on. This paper proposes a novel approach for parameterizing a mesh onto another one directly. The main idea of our method is to combine competitive learning and least-squares mesh techniques. It is enough to give some semantic feature correspondences between target meshes, even if they are in different shapes or in different poses. We show the effectiveness of our approach by giving some examples of its applications.
Citations: 157
The Soft Shadow Occlusion Camera
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.23
Qi Mo, V. Popescu, Chris Wyman
A fundamental challenge for existing shadow map based algorithms is dealing with partially illuminated surfaces. A conventional shadow map built with a pinhole camera only determines a binary light visibility at each point, and this all-or-nothing approach to visibility does not capture penumbral regions. We present an interactive soft shadow algorithm based on a variant of the depth discontinuity occlusion camera, a non-pinhole camera with rays that reach around blockers to sample normally hidden surfaces. Our soft shadow occlusion camera (SSOC) classifies a fragment on a continuum from fully visible to fully hidden, as seen from the light. The SSOC is used directly in fragment illumination computation without building an explicit "soft shadow map." This method renders plausible soft shadows at interactive speeds under fully dynamic conditions.
Citations: 32
Towards Digital Refocusing from a Single Photograph
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.22
Yosuke Bando, T. Nishita
This paper explores an image processing method for synthesizing refocused images from a single input photograph containing some defocus blur. First, we restore a sharp image by estimating and removing spatially-variant defocus blur in an input photograph. To do this, we propose a local blur estimation method able to handle abrupt blur changes at depth discontinuities in a scene, and we also present an efficient blur removal method that significantly speeds up the existing deconvolution algorithm. Once a sharp image is restored, refocused images can be interactively created by adding different defocus blur to it based on the estimated blur, so that users can intuitively change focus and depth-of-field of the input photograph. Although information available from a single photograph is highly insufficient for fully correct refocusing, the results show that visually plausible refocused images can be obtained.
Citations: 47
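The blur-removal step can be illustrated with a standard frequency-domain Wiener filter and a known Gaussian defocus PSF. This is a generic stand-in for deconvolution in general, not the paper's spatially-variant blur estimation or its accelerated algorithm; the kernel shape and regularization constant are assumptions of this sketch:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian point-spread function, normalized and shifted so its
    center sits at index (0, 0) as FFT-based convolution expects."""
    h, w = shape
    y = np.arange(h) - h // 2
    x = np.arange(w) - w // 2
    g = np.exp(-(y[:, None]**2 + x[None, :]**2) / (2 * sigma**2))
    return np.fft.ifftshift(g / g.sum())

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener filter: X = conj(H) * B / (|H|^2 + k).
    k regularizes frequencies where the blur destroyed the signal."""
    H = np.fft.fft2(psf)
    B = np.fft.fft2(blurred)
    X = np.conj(H) * B / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft2(X))
```

Once a sharp estimate exists, "refocusing" in the abstract's sense amounts to re-blurring it with a different, spatially-varying kernel derived from the estimated blur map.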
Fast Hydraulic Erosion Simulation and Visualization on GPU
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.15
Xing Mei, Philippe Decaudin, Bao-Gang Hu
Natural mountains and valleys are gradually eroded by rainfall and river flows. Physically-based modeling of this complex phenomenon is a major concern in producing realistic synthesized terrains. However, despite some recent improvements, existing algorithms are still computationally expensive, leading to a time-consuming process fairly impractical for terrain designers and 3D artists. In this paper, we present a new method to model the hydraulic erosion phenomenon which runs at interactive rates on today's computers. The method is based on the velocity field of the running water, which is created with an efficient shallow-water fluid model. The velocity field is used to calculate the erosion and deposition process, and the sediment transportation process. The method has been carefully designed to be implemented totally on GPU, and thus takes full advantage of the parallelism of current graphics hardware. Results from experiments demonstrate that the proposed method is effective and efficient. It can create realistic erosion effects by rainfall and river flows, and produce fast simulation results for terrains with large sizes.
Citations: 114
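The velocity-driven erosion/deposition step can be sketched in 1D: sediment capacity grows with flow speed, terrain is eroded where capacity exceeds the suspended sediment, and material is deposited where it falls short. The constants and the 1D setting are illustrative only; the paper runs the full shallow-water model on a 2D grid entirely on the GPU:

```python
import numpy as np

def erosion_step(height, sediment, velocity, kc=0.1, ks=0.5, kd=0.5):
    """One erosion/deposition update on a 1D height field.

    height, sediment, velocity : 1D arrays per terrain cell
    kc : capacity constant, ks : erosion rate, kd : deposition rate
    (values are illustrative, not the paper's).
    """
    capacity = kc * np.abs(velocity)          # how much sediment the flow can carry
    erode = capacity > sediment
    delta = np.where(erode,
                     ks * (capacity - sediment),    # soil removed from terrain
                     -kd * (sediment - capacity))   # sediment settled back down
    # Moving `delta` from terrain to suspension conserves total material.
    return height - delta, sediment + delta
```

A full step would follow this with sediment advection along the velocity field and water update from the shallow-water equations; the update above is the part that actually reshapes the terrain.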
Radiometric Compensation through Inverse Light Transport
15th Pacific Conference on Computer Graphics and Applications (PG'07), Pub Date: 2007-10-29, DOI: 10.1109/PG.2007.47
Gordon Wetzstein, O. Bimber
Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired - as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between projectors and a camera to account for many illumination aspects, such as interreflections, refractions, shadows, and defocus. Precomputing the inverse light transport in combination with an efficient implementation on the GPU makes the real-time compensation of captured local and global light modulations possible.
Citations: 127
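The core relation is linear: the camera image c equals the light transport matrix T applied to the projector image p, so the compensation image is the (pseudo-)inverse transport applied to the desired appearance, clamped to displayable intensities. A tiny numpy sketch with a synthetic 4-pixel transport matrix; the matrix values are illustrative, and the paper precomputes the inverse transport and applies it on the GPU rather than solving per frame:

```python
import numpy as np

def compensate(T, desired, lo=0.0, hi=1.0):
    """Solve T p = desired for the projector image p and clamp it to the
    displayable range. Least squares also covers non-square, ill-posed T."""
    p, *_ = np.linalg.lstsq(T, desired, rcond=None)
    return np.clip(p, lo, hi)

# Synthetic transport: a diagonal direct-illumination term plus weak
# interreflection coupling between neighboring pixels (illustrative numbers).
T = 0.8 * np.eye(4) + 0.05 * (np.eye(4, k=1) + np.eye(4, k=-1))
desired = np.array([0.5, 0.5, 0.5, 0.5])
p = compensate(T, desired)   # projecting p through T reproduces `desired`
```

In practice T is captured per projector-camera pair, and clamping is where compensation visibly fails: surface patches that absorb too much light simply cannot be driven to the desired intensity.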