{"title":"Interactive motion correction and object manipulation","authors":"Ari Shapiro, Marcelo Kallmann, P. Faloutsos","doi":"10.1145/1230100.1230124","DOIUrl":"https://doi.org/10.1145/1230100.1230124","url":null,"abstract":"Editing recorded motions to make them suitable for different sets of environmental constraints is a general and difficult open problem. In this paper we solve a significant part of this problem by modifying full-body motions with an interactive randomized motion planner. Our method is able to synthesize collision-free motions for specified linkages of multiple animated characters in synchrony with the characters' full-body motions. The proposed method runs at interactive speed for dynamic environments of realistic complexity. We demonstrate the effectiveness of our interactive motion editing approach with two important applications: (a) motion correction (to remove collisions) and (b) synthesis of realistic object manipulation sequences on top of locomotion.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123661699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flow simulation with locally-refined LBM","authors":"Ye Zhao, Feng Qiu, Zhe Fan, A. Kaufman","doi":"10.1145/1230100.1230132","DOIUrl":"https://doi.org/10.1145/1230100.1230132","url":null,"abstract":"We simulate 3D fluid flow by a locally-refined lattice Boltzmann method (LBM) on graphics hardware. A low resolution LBM simulation running on a coarse grid models global flow behavior of the entire domain with low consumption of computational resources. For regions of interest where small visual details are desired, LBM simulations are performed on fine grids, which are separate grids superposed on the coarse one. The flow properties on boundaries of the fine grids are determined by the global simulation on the coarse grid. Thus, the locally refined fine-grid simulations follow the global fluid behavior, and model the desired small-scale and turbulent flow motion with their denser numerical discretization. A fine grid can be initiated and terminated at any time while the global simulation is running. It can also move inside the domain with a moving object to capture small-scale vortices caused by the object. Besides the performance improvement due to the adaptive simulation, the locally-refined LBM is suitable for acceleration on contemporary graphics hardware (GPU), since it involves only local and linear computations. Therefore, our approach achieves fast and adaptive 3D flow simulation for computer games and other interactive applications.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128914097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ambient aperture lighting","authors":"Christopher Oat, P. Sander","doi":"10.1145/1230100.1230111","DOIUrl":"https://doi.org/10.1145/1230100.1230111","url":null,"abstract":"This paper introduces a new real-time shading model that uses spherical cap intersections to approximate a surface's incident lighting from dynamic area light sources. Our method uses precomputed visibility information for static meshes to compute illumination with approximate high-frequency shadows in a single rendering pass. Because this technique relies on precomputed visibility data, the mesh is assumed to be static at render time. Due to its high efficiency and low memory footprint this method is highly suitable for games.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121327097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive k-d tree GPU raytracing","authors":"D. Horn, J. Sugerman, M. Houston, P. Hanrahan","doi":"10.1145/1230100.1230129","DOIUrl":"https://doi.org/10.1145/1230100.1230129","url":null,"abstract":"Over the past few years, the powerful computation rates and high memory bandwidth of GPUs have attracted efforts to run raytracing on GPUs. Our work extends Foley et al.'s GPU k-d tree research. We port their kd-restart algorithm from multi-pass, using CPU load balancing, to single pass, using current GPUs' branching and looping abilities. We introduce three optimizations: a packetized formulation, a technique for restarting partially down the tree instead of at the root, and a small, fixed-size stack that is checked before resorting to restart. Our optimized implementation achieves 15 - 18 million primary rays per second and 16 - 27 million shadow rays per second on our test scenes. Our system also takes advantage of GPUs' strengths at rasterization and shading to offer a mode where rasterization replaces eye ray scene intersection, and primary hits and local shading are produced with standard Direct3D code. For 1024x1024 renderings of our scenes with shadows and Phong shading, we achieve 12-18 frames per second. Finally, we investigate the efficiency of our implementation relative to the computational resources of our GPUs and also compare it against conventional CPUs and the Cell processor, which both have been shown to raytrace well.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126523532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometry engine optimization: cache friendly compressed representation of geometry","authors":"J. Chhugani, Subodh Kumar","doi":"10.1145/1230100.1230102","DOIUrl":"https://doi.org/10.1145/1230100.1230102","url":null,"abstract":"Recent advances in graphics architecture focus on improving texture performance and pixel processing. These have paralleled advances in rich pixel shading algorithms for realistic images. However, applications that require significantly more geometry processing than pixel processing suffer due to limited resource being devoted to the geometry processing part of the graphics pipeline. We present an algorithm to improve the effective geometry processing performance without adding significant hardware. This algorithm computes a representation for geometry that reduces the bandwidth required to transmit it to the graphics subsystem. It also reduces the total geometry processing requirement by increasing the effectiveness of the vertex cache. A goal of this algorithm is to keep the primitive assembly simple for easy hardware implementation.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125848161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Example-based model synthesis","authors":"Paul C. Merrell","doi":"10.1145/1230100.1230119","DOIUrl":"https://doi.org/10.1145/1230100.1230119","url":null,"abstract":"Model synthesis is a new approach to 3D modeling which automatically generates large models that resemble a small example model provided by the user. Model synthesis extends the 2D texture synthesis problem into higher dimensions and can be used to model many different objects and environments. The user only needs to provide an appropriate example model and does not need to provide any other instructions about how to generate the model. Model synthesis can be used to create symmetric models, models that change over time, and models that fit soft constraints. There are two important differences between our method and existing texture synthesis algorithms. The first is the use of a global search to find potential conflicts before adding new material to the model. The second difference is that we divide the problem of generating a large model into smaller subproblems which are easier to solve.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116168004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-grained level of detail using a hierarchical seamless texture atlas","authors":"K. Niski, Budirijanto Purnomo, J. Cohen","doi":"10.1145/1230100.1230127","DOIUrl":"https://doi.org/10.1145/1230100.1230127","url":null,"abstract":"Previous algorithms for view-dependent level of detail provide local mesh refinements either at the finest granularity or at a fixed, coarse granularity. The former provides triangle-level adaptation, often at the expense of heavy CPU usage and low triangle rendering throughput; the latter improves CPU usage and rendering throughput by operating on groups of triangles. We present a new multiresolution hierarchy and associated algorithms that provide adaptive granularity. This multi-grained hierarchy allows independent control of the number of hierarchy nodes processed on the CPU and the number of triangles to be rendered on the GPU. We employ a seamless texture atlas style of geometry image as a GPU-friendly data organization, enabling efficient rendering and GPU-based stitching of patch borders. We demonstrate our approach on both large triangle meshes and terrains with up to billions of vertices.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122430221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time procedural volumetric fire","authors":"A. R. Fuller, Harinarayan Krishnan, Karim Mahrous, B. Hamann, K. Joy","doi":"10.1145/1230100.1230131","DOIUrl":"https://doi.org/10.1145/1230100.1230131","url":null,"abstract":"We present a method for generating procedural volumetric fire in real time. By combining curve-based volumetric free-form deformation, hardware-accelerated volumetric rendering and Improved Perlin Noise or M-Noise we are able to render a vibrant and uniquely animated volumetric fire that supports bi-directional environmental macro-level interactivity. Our system is easily customizable by content artists. The fire is animated both on the macro and micro levels. Macro changes are controlled either by a prescripted sequence of movements, or by a realistic particle simulation that takes into account movement, wind, high-energy particle dispersion and thermal buoyancy. Micro fire effects such as individual flame shape, location, and flicker are generated in a pixel shader using three- to four-dimensional Improved Perlin Noise or M-Noise (depending on hardware limitations and performance requirements). Our method supports efficient collision detection, which, when combined with a sufficiently intelligent particle simulation, enables real-time bi-directional interaction between the fire and its environment. The result is a three-dimensional procedural fire that is easily designed and animated by content artists, supports dynamic interaction, and can be rendered in real time.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128824399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient histogram generation using scattering on GPUs","authors":"T. Scheuermann, J. Hensley","doi":"10.1145/1230100.1230105","DOIUrl":"https://doi.org/10.1145/1230100.1230105","url":null,"abstract":"We present an efficient algorithm to compute image histograms entirely on the GPU. Unlike previous implementations that use a gather approach, we take advantage of scattering data through the vertex shader and of high-precision blending available on modern GPUs. This results in fewer operations executed per pixel and speeds up the computation. Our approach allows us to create histograms with arbitrary numbers of buckets in a single rendering pass, and avoids the need for any communication from the GPU back to the CPU: The histogram stays in GPU memory and is immediately available for further processing. We discuss solutions to dealing with the challenges of implementing our algorithm on GPUs that have limited computational and storage precision. Finally, we provide examples of the kinds of graphics algorithms that benefit from the high performance of our histogram generation approach.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121553938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rendering from compressed high dynamic range textures on programmable graphics hardware","authors":"Lvdi Wang, Xi Wang, Peter-Pike J. Sloan, Li-Yi Wei, Xin Tong, B. Guo","doi":"10.1145/1230100.1230103","DOIUrl":"https://doi.org/10.1145/1230100.1230103","url":null,"abstract":"High dynamic range (HDR) images are increasingly employed in games and interactive applications for accurate rendering and illumination. One disadvantage of HDR images is their large data size; unfortunately, even though solutions have been proposed for future hardware, commodity graphics hardware today does not provide any native compression for HDR textures. In this paper, we perform extensive study of possible methods for supporting compressed HDR textures on commodity graphics hardware. A desirable solution must be implementable on DX9 generation hardware, as well as meet the following requirements. First, the data size should be small and the reconstruction quality must be good. Second, the decompression must be efficient; in particular, bilinear/trilinear/anisotropic texture filtering ought to be performed via native texture hardware instead of custom pixel shader filtering. We present a solution that optimally meets these requirements. Our basic idea is to convert a HDR texture to a custom LUVW space followed by an encoding into a pair of 8-bit DXT textures. Since DXT format is supported on modern commodity graphics hardware, our approach has wide applicability. Our compression ratio is 3:1 for FP16 inputs, allowing applications to store 3 times the number of HDR texels in the same memory footprint. Our decompressor is efficient and can be implemented as a short pixel program. We leverage existing texturing hardware for fast decompression and native texture filtering, allowing HDR textures to be utilized just like traditional 8-bit DXT textures. Our reduced data size has a further advantage: it is even faster than rendering from uncompressed HDR textures due to our reduced texture memory access. Given the quality and efficiency, we believe our approach suitable for games and interactive applications.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128892953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}