SIGGRAPH Asia 2013 Technical Briefs: Latest Articles

Generating flow fields variations by modulating amplitude and resizing simulation space
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542371
Syuhei Sato, Y. Dobashi, Kei Iwasaki, Hiroyuki Ochiai, Tsuyoshi Yamamoto

Abstract: The visual simulation of fluids has become an important element in many applications, such as movies and computer games. In these applications, large-scale fluid scenes, such as fire in a village, are often simulated by repeatedly rendering multiple small-scale fluid flows. In these cases, animators are requested to generate many variations of a small-scale fluid flow. This paper presents a method to help animators meet such requirements. Our method enables the user to generate flow field variations from a single simulated dataset obtained by fluid simulation. The variations are generated in both the frequency and spatial domains. Fluid velocity fields are represented using Laplacian eigenfunctions, which ensure that the flow field is always incompressible. In generating the variations in the frequency domain, we modulate the coefficients (amplitudes) of the basis functions. To generate variations in the spatial domain, our system expands or contracts the simulation space, then the flow is calculated by solving a minimization problem subject to the resized velocity field. Using our method, the user can easily create various animations from a single dataset calculated by fluid simulation.
Citations: 1
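The frequency-domain part of the abstract above, scaling the amplitudes of divergence-free basis fields, can be sketched in 2D. The sine-mode stream functions, weights, and modulation range below are illustrative assumptions, not the paper's actual basis or solver:

```python
import numpy as np

def eigenmode_velocity(k1, k2, X, Y):
    """Velocity from the stream function sin(k1*x)*sin(k2*y) on [0, pi]^2.
    u = d(psi)/dy, v = -d(psi)/dx, so each mode is divergence-free."""
    u = k2 * np.sin(k1 * X) * np.cos(k2 * Y)
    v = -k1 * np.cos(k1 * X) * np.sin(k2 * Y)
    return u, v

n = 64
xs = np.linspace(0, np.pi, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")

modes = [(1, 1), (1, 2), (2, 1), (2, 2)]      # illustrative mode set
w = np.array([1.0, 0.5, 0.5, 0.25])           # base amplitudes

# "Variation": randomly modulate the amplitudes; incompressibility is
# preserved because every basis field is divergence-free.
rng = np.random.default_rng(0)
w_var = w * rng.uniform(0.5, 1.5, size=len(w))

u = np.zeros_like(X)
v = np.zeros_like(Y)
for wk, (k1, k2) in zip(w_var, modes):
    uk, vk = eigenmode_velocity(k1, k2, X, Y)
    u += wk * uk
    v += wk * vk

h = xs[1] - xs[0]
div = np.gradient(u, h, axis=0) + np.gradient(v, h, axis=1)
print(abs(div).max())  # small: only finite-difference truncation error
```

Any random reweighting of the coefficients yields a new incompressible field, which is the property the abstract relies on.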
Stochastic modeling of immersed rigid-body dynamics
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542370
Haoran Xie, K. Miyata

Abstract: The simulation of immersed rigid-body dynamics involves the coupling between objects and turbulent flows, which is a complicated task in computer animation. In this paper, we propose a stochastic model of the dynamics of rigid bodies immersed in viscous flows to solve this problem. We first model the dynamic equations of rigid bodies using generalized Kirchhoff equations (GKE). Then, a stochastic differential equation called the Langevin equation is proposed to represent the velocity increments due to the turbulence. After precomputing the Kirchhoff tensor and the kinetic energy of the synthetic turbulence induced by the moving object, we utilize a fractional-step method to solve the GKE with vortical loads of drag and lift dynamics at runtime. The resulting animations include both inertial and viscous effects from the surrounding flows for arbitrary geometric objects. Our model is coherent and effective for simulating immersed rigid-body dynamics in real time.
Citations: 6
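The Langevin equation mentioned above can be integrated with a simple Euler-Maruyama step. The drag and noise coefficients below are arbitrary stand-ins for the paper's turbulence-derived quantities:

```python
import numpy as np

# Euler-Maruyama integration of a 1D Langevin equation
#   dv = -gamma * v * dt + sigma * dW
# (gamma and sigma are hypothetical; they stand in for the
#  turbulence-induced velocity increments described in the abstract)
gamma, sigma = 2.0, 0.8
dt, steps = 1e-3, 200_000

rng = np.random.default_rng(1)
v = np.empty(steps)
v[0] = 0.0
for i in range(1, steps):
    dW = rng.normal(0.0, np.sqrt(dt))      # Wiener increment
    v[i] = v[i - 1] - gamma * v[i - 1] * dt + sigma * dW

# This is an Ornstein-Uhlenbeck process; its stationary variance
# is sigma^2 / (2 * gamma), a useful sanity check on the integrator.
print(v[steps // 2:].var(), sigma**2 / (2 * gamma))
```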
Progressive medial axis filtration
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542359
Noura Faraj, Jean-Marc Thiery, T. Boubekeur

Abstract: The Scale Axis Transform provides a parametric simplification of the Medial Axis of a 3D shape which can be seen as a hierarchical description. However, this powerful shape analysis method has a significant computational cost, requiring several minutes for a single scale on a mesh of a few thousand vertices. Moreover, the scale axis can be artificially complexified at large scales, introducing new topological structures in the simplified model. In this paper, we propose a progressive medial axis simplification method inspired by surface optimization techniques which retains the geometric intuition of the scale axis transform. We compute a hierarchy of simplified medial axes by means of successive edge-collapses of the input medial axis. These operations prevent the creation of artificial tunnels that can occur in the original scale axis transform. As a result, our progressive simplification approach allows us to compute the complete hierarchy of scales in a few seconds on typical input medial axes. We show how this variation of the scale axis transform impacts the resulting medial structure.
Citations: 20
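The successive edge-collapse idea can be illustrated on the simplest possible structure, a 2D polyline: repeatedly collapse the shortest segment to its midpoint until a target size is reached. This is only a toy stand-in for the paper's prioritized collapses on a full medial axis (which also carries radii and topology constraints):

```python
import numpy as np

def collapse_shortest_edges(pts, target):
    """Greedy edge-collapse on a polyline: replace the two endpoints of
    the shortest segment with their midpoint until `target` vertices remain.
    O(n^2) recomputation for clarity; a priority queue would be used at scale."""
    pts = [np.asarray(p, float) for p in pts]
    while len(pts) > target:
        seg = np.array([np.linalg.norm(pts[i + 1] - pts[i])
                        for i in range(len(pts) - 1)])
        i = int(seg.argmin())
        mid = 0.5 * (pts[i] + pts[i + 1])
        pts[i:i + 2] = [mid]                 # collapse edge i
    return np.array(pts)

poly = [(0, 0), (1, 0), (1.05, 0), (2, 0), (3, 0.5)]
simplified = collapse_shortest_edges(poly, 4)
print(simplified)  # the nearly-duplicate pair near x = 1 merges first
```

Running the loop to smaller and smaller targets yields the kind of nested hierarchy of simplifications the abstract describes.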
Beyond keyframe animations: a controller character-based stepping approach
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542368
Ben Kenwright, Chu-Chien Huang

Abstract: We present a controllable stepping method for procedurally generating upright biped animations in real-time for three-dimensional changing environments without key-frame data. In complex virtual worlds, a character's stepping location can be limited or constrained (e.g., on stepping stones). While it is common in pendulum-based stepping techniques to calculate the foot-placement location to counteract disturbances and maintain a controlled speed while walking (e.g., the capture point), we specify a foot location based on the terrain constraints and change the leg length to accomplish the same goal. This allows us to precisely navigate a complex terrain while remaining responsive and robust (e.g., the ability to move the foot to a specific location at a controlled speed and trajectory and handle disruptions). We demonstrate our model's ability through various simulation situations, such as push disturbances, walking on uneven terrain, walking on stepping stones, and walking up and down stairs. The questions we aim to address are: Why do we use the inverted pendulum model? What advantages does it provide? What are its limitations? What are the different types of inverted pendulum model? How do we control the inverted pendulum? And how do we make the inverted pendulum a viable solution for generating 'controlled' character stepping animations?
Citations: 3
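The capture point referenced in the abstract has a simple closed form for the linear inverted pendulum: place the foot at x + v/omega with omega = sqrt(g/z0), and the center of mass glides to rest over it. A minimal sketch using the analytic pendulum solution (parameters are illustrative):

```python
import math

def capture_point(x, xdot, z0=1.0, g=9.81):
    """Instantaneous capture point of the linear inverted pendulum (LIP):
    the foot placement that brings the CoM to rest directly over the foot."""
    omega = math.sqrt(g / z0)
    return x + xdot / omega

def lip_state(x0, v0, p, t, z0=1.0, g=9.81):
    """Closed-form LIP CoM position/velocity with the foot fixed at p,
    from xddot = omega^2 * (x - p)."""
    w = math.sqrt(g / z0)
    x = p + (x0 - p) * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
    v = (x0 - p) * w * math.sinh(w * t) + v0 * math.cosh(w * t)
    return x, v

x0, v0 = 0.0, 0.6            # CoM pushed forward at 0.6 m/s
p = capture_point(x0, v0)    # step here to absorb the push
x5, v5 = lip_state(x0, v0, p, t=5.0)
print(p, x5, v5)             # x -> p and v -> 0: the character balances
```

Stepping short of or beyond p makes the pendulum keep falling forward or fall backward, which is exactly the trade-off the foot-placement controllers in the abstract negotiate against terrain constraints.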
The shading probe: fast appearance acquisition for mobile AR
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542380
D. A. Calian, Kenny Mitchell, D. Nowrouzezahrai, J. Kautz

Abstract: The ubiquity of mobile devices with powerful processors and integrated video cameras is re-opening the discussion on practical augmented reality (AR). Despite this technological convergence, several issues prevent reliable and immersive AR on these platforms. We address one such problem: the shading of virtual objects and determination of lighting that remains consistent with the surrounding environment. We design a novel light probe and exploit its structure to permit an efficient reformulation of the rendering equation that is suitable for fast shading on mobile devices. Unlike prior approaches, our shading probe directly captures the shading, and not the incident light, in a scene. As such, we avoid costly and unreliable radiometric calibration and side-step the need for complex shading algorithms. Moreover, we can tailor the shading probe's structure to better handle common lighting scenarios, such as outdoor settings. We achieve high-performance shading of virtual objects in an AR context, incorporating plausible local global-illumination effects, on mobile platforms.
Citations: 28
Non-convex hull surfaces
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542358
G. Taubin

Abstract: We present a new algorithm to reconstruct approximating watertight surfaces from finite oriented point clouds. The Convex Hull (CH) of an arbitrary set of points, constructed as the intersection of all the supporting linear half spaces, is a piecewise linear watertight surface, but usually a poor approximation of the sampled surface. We introduce the Non-Convex Hull (NCH) of an oriented point cloud as the intersection of complementary supporting spherical half spaces, one per point. The boundary surface of this set is a piecewise quadratic interpolating surface, which can also be described as the zero level set of the NCH Signed Distance function. We evaluate the NCH Signed Distance function on the vertices of a volumetric mesh, regular or adaptive, and generate an approximating polygonal mesh for the NCH Surface using an isosurface algorithm. Despite its simplicity, the algorithm produces high-quality polygon meshes competitive with those generated by state-of-the-art algorithms. The relation to the Medial Axis Transform is described.
Citations: 3
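The NCH signed distance has a compact per-point form: each oriented sample contributes f_i(x) = n_i . (x - p_i) - rho_i * |x - p_i|^2, and the field is the maximum over all samples. The 2D sketch below fixes rho to a constant for simplicity (the paper derives per-point radii), using a circle point cloud where the behavior is easy to verify:

```python
import numpy as np

def nch_signed_distance(x, points, normals, rho=0.5):
    """Max over complementary spherical half-space functions
    f_i(x) = n_i . (x - p_i) - rho * |x - p_i|^2.
    Constant rho is a simplifying assumption; the paper computes one
    radius per point so the spheres stay empty of other samples."""
    d = x - points                                  # (n, dim) offsets
    f = (normals * d).sum(axis=1) - rho * (d * d).sum(axis=1)
    return f.max()

# Oriented samples of the unit circle with outward normals.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
nrm = pts.copy()

inside = nch_signed_distance(np.array([0.0, 0.0]), pts, nrm)
outside = nch_signed_distance(np.array([2.0, 0.0]), pts, nrm)
on_surf = nch_signed_distance(pts[0], pts, nrm)
print(inside, outside, on_surf)  # negative, positive, exactly 0 at a sample
```

Evaluating this function on a grid and extracting the zero level set with marching squares/cubes is the pipeline the abstract describes.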
Importance sampling for physically-based hair fiber models
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542386
Eugene d'Eon, Steve Marschner, Johannes Hanika

Abstract: We present a new strategy for importance sampling hair reflectance models. To combine hair reflectance models with increasingly popular physically-based rendering algorithms, an efficient sampling scheme is required to select scattered rays that lead to lower variance and noise. Our new strategy, which is tied closely to the derivation of physically-based fiber functions, works well for both smooth and rough fibers based on the Marschner et al. model and also for Lambertian fibers. It should be directly usable with future hair reflectance models that allow for more general cross-sections and more complex surface properties, provided the lobes are derived in a similar, separable fashion. Our strategy includes lobe selection and can efficiently sample complex lobe shapes like the Marschner TRT function. The scheme is easy to implement and requires no precomputation, allowing fully heterogeneous variation of all fiber parameters.
Citations: 32
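The core mechanic behind lobe sampling is drawing directions from a 1D lobe by inverting its CDF. The sketch below samples a logistic lobe, a smooth, analytically invertible stand-in for a rough-fiber longitudinal lobe; it is not the paper's exact derivation, and the lobe center/width values are illustrative:

```python
import math
import random

def sample_logistic(mu, s, u):
    """Inverse-CDF sample of a logistic lobe centered at mu with scale s.
    CDF F(x) = 1 / (1 + exp(-(x - mu)/s))  =>  x = mu + s * ln(u / (1 - u))."""
    return mu + s * math.log(u / (1.0 - u))

random.seed(2)
mu, s = 0.1, 0.05      # hypothetical lobe center (e.g. cuticle tilt) and width
xs = [sample_logistic(mu, s, random.random()) for _ in range(200_000)]

# Sanity check the sampler against the analytic moments of the logistic
# distribution: mean = mu, variance = s^2 * pi^2 / 3.
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var, s * s * math.pi**2 / 3)
```

Because samples are drawn proportionally to the lobe, the BSDF/pdf ratio in a path tracer stays near constant, which is where the variance reduction comes from.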
Cross-sectional structural analysis for 3D printing optimization
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542361
Nobuyuki Umetani, Ryan M. Schmidt

Abstract: We propose a novel cross-sectional structural analysis technique that efficiently detects critical stress inside a 3D object. We slice the object into cross-sections and compute stress based on bending momentum equilibrium. Unlike traditional approaches based on finite element methods, our method does not require a volumetric mesh or solution of linear systems, enabling interactive analysis speed. Based on the stress analysis, the orientation of an object is optimized to increase mechanical strength when manufactured with 3D printing.
Citations: 149
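The slice-and-equilibrium idea reduces, in the simplest case, to classical beam bending: at each cross-section, sum the moment of everything beyond the cut and evaluate sigma = M*c/I. A minimal sketch on a rectangular cantilever (the geometry and load are illustrative, and a real part would use the actual section's area moment rather than a rectangle's):

```python
def bending_stress(moment, b, h):
    """Max bending stress on a rectangular b x h cross-section:
    sigma = M * c / I with I = b*h^3/12 and c = h/2, i.e. 6M / (b*h^2)."""
    return 6.0 * moment / (b * h * h)

# Toy cantilever: length L clamped at x = 0, tip load F at x = L.
L, F = 0.10, 20.0        # 10 cm part, 20 N tip load
b, h = 0.01, 0.005       # 10 mm x 5 mm cross-section
slices = 50

stresses = []
for k in range(slices):
    x = k * L / slices          # slice position from the clamped end
    M = F * (L - x)             # moment equilibrium: load beyond the slice
    stresses.append(bending_stress(M, b, h))

root_stress = stresses[0]
print(root_stress)  # 6*F*L/(b*h^2) = 4.8e7 Pa (48 MPa) at the clamp
```

Scanning the per-slice stresses immediately locates the critical section (here the clamp), with no volumetric mesh or linear solve, which is the speed argument the abstract makes.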
Parallel iso/aniso-scale surface texturing guided in Gabor space
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542364
Bin Sheng, Hanqiu Sun, Yubao Wu, D. Thalmann

Abstract: This paper presents a parallel texture synthesis method over arbitrary surfaces, generating consistent and spatially-varying visual appearances. A novel scaling field is introduced to measure the geometry-aware appearance or geometric deformation, so that the generated textures locally agree with the geometric structure and maintain coherence during shape deformation. We compute the Gabor feature space for the 2D exemplars to determine the k-coherence candidate pixels. This Gabor feature space is not only low-dimensional but also captures multi-resolution texture structures, so the k-coherence matching guided by Gabor space improves the performance and quality of pixel similarity measurement. We directly apply multi-pass correction for each vertex according to its local neighborhood to achieve order independence. Experimental results demonstrate that our method produces significantly improved surface textures with parallel performance.
Citations: 3
Improving robustness of Monte-Carlo global illumination with directional regularization
SIGGRAPH Asia 2013 Technical Briefs | Pub Date: 2013-11-19 | DOI: 10.1145/2542355.2542383
Guillaume Bouchard, J. Iehl, V. Ostromoukhov, Pierre Poulin

Abstract: Directional regularization offers great potential to improve the convergence rates of Monte-Carlo-based global illumination algorithms. In this paper, we show how it can be applied successfully by combining unbiased bidirectional strategies, photon mapping, and biased directional regularization.
Citations: 9