{"title":"Quick transitions with cached multi-way blends","authors":"L. Ikemoto, Okan Arikan, D. Forsyth","doi":"10.1145/1230100.1230125","DOIUrl":"https://doi.org/10.1145/1230100.1230125","url":null,"abstract":"We describe a discriminative method for distinguishing natural-looking from unnatural-looking motion. Our method is based on physical and data-driven features of motion to which humans seem sensitive. We demonstrate that our technique is significantly more accurate than current alternatives. We use this technique as the testing part of a hypothesize-and-test motion synthesis procedure. The mechanism we build using this procedure can quickly provide an application with a transition of user-specified duration from any frame in a motion collection to any other frame in the collection. During pre-processing, we search all possible 2-, 3-, and 4-way blends between representative samples of motion obtained using clustering. The blends are automatically evaluated, and the recipe (i.e., the representatives and the set of weighting functions) that created the best blend is cached. At run-time, we build a transition between motions by matching a future window of the source motion to a representative, matching the past of the target motion to a representative, and then applying the blend recipe recovered from the cache to source and target motion. People seem sensitive to poor contact with the environment like sliding foot plants. We determine appropriate temporal and positional constraints for each foot plant using a novel technique, then apply an off-the-shelf inverse kinematics technique to enforce the constraints. This synthesis procedure yields good-looking transitions between distinct motions with very low online cost.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126016529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"4D compression and relighting with high-resolution light transport matrices","authors":"Ewen Cheslack-Postava, N. Goodnight, Ren Ng, R. Ramamoorthi, G. Humphreys","doi":"10.1145/1230100.1230115","DOIUrl":"https://doi.org/10.1145/1230100.1230115","url":null,"abstract":"This paper presents a method for efficient compression and relighting with high-resolution, precomputed light transport matrices. We accomplish this using a 4D wavelet transform, transforming the columns of the transport matrix, in addition to the 2D row transform used in previous work. We show that a standard 4D wavelet transform can actually inflate portions of the matrix, because high-frequency lights lead to high-frequency images that cannot easily be compressed. Therefore, we present an adaptive 4D wavelet transform that terminates at a level that avoids inflation and maximizes sparsity in the matrix data. Finally, we present an algorithm for fast relighting from adaptively compressed transport matrices. Combined with a GPU-based precomputation pipeline, this results in an image and geometry relighting system that performs significantly better than 2D compression techniques, on average 2x-3x better in terms of storage cost and rendering speed for equal quality matrices.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121435253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Skinning with dual quaternions","authors":"L. Kavan, S. Collins, J. Zára, C. O'Sullivan","doi":"10.1145/1230100.1230107","DOIUrl":"https://doi.org/10.1145/1230100.1230107","url":null,"abstract":"Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this paper, we present a novel GPU-friendly skinning algorithm based on dual quaternions. We show that this approach solves the artifacts of linear blend skinning at minimal additional cost. Upgrading an existing animation system (e.g., in a videogame) from linear to dual quaternion skinning is very easy and has negligible impact on run-time performance.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114428968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time refraction through deformable objects","authors":"M. M. O. Neto, Maicon Brauwers","doi":"10.1145/1230100.1230116","DOIUrl":"https://doi.org/10.1145/1230100.1230116","url":null,"abstract":"Light refraction is an important optical phenomenon whose simulation greatly contributes to the realism of synthesized images. Although ray tracing can correctly simulate light refraction, doing it in real time still remains a challenge. This work presents an image-space technique to simulate the refraction of distant environments in real time. Contrary to previous approaches for interactive refraction at multiple interfaces, the proposed technique does not require any preprocessing. As a result, it can be directly applied to objects undergoing shape deformations, which is a common and important feature for character animation in computer games and movies. Our approach is general in the sense that it can be used with any object representation that can be rasterized on a programmable GPU. It is based on an efficient ray-intersection procedure performed against a dynamic depth map and carried out in 2D texture space. We demonstrate the effectiveness of our approach by simulating refractions through animated characters composed of several hundred thousand polygons in real time.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131072519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-fragment effects on the GPU using the k-buffer","authors":"Louis Bavoil, Steven P. Callahan, A. Lefohn, J. Comba, Cláudio T. Silva","doi":"10.1145/1230100.1230117","DOIUrl":"https://doi.org/10.1145/1230100.1230117","url":null,"abstract":"Many interactive rendering algorithms require operations on multiple fragments (i.e., ray intersections) at the same pixel location: however, current Graphics Processing Units (GPUs) capture only a single fragment per pixel. Example effects include transparency, translucency, constructive solid geometry, depth-of-field, direct volume rendering, and isosurface visualization. With current GPUs, programmers implement these effects using multiple passes over the scene geometry, often substantially limiting performance. This paper introduces a generalization of the Z-buffer, called the k-buffer, that makes it possible to efficiently implement such algorithms with only a single geometry pass, yet requires only a small, fixed amount of additional memory. The k-buffer uses framebuffer memory as a read-modify-write (RMW) pool of k entries whose use is programmatically defined by a small k-buffer program. We present two proposals for adding k-buffer support to future GPUs and demonstrate numerous multiple-fragment, single-pass graphics algorithms running on both a software-simulated k-buffer and a k-buffer implemented with current GPUs. The goal of this work is to demonstrate the large number of graphics algorithms that the k-buffer enables and that the efficiency is superior to current multipass approaches.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114826289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Progressive perceptual audio rendering of complex scenes","authors":"Thomas Moeck, Nicolas Bonneel, N. Tsingos, G. Drettakis, I. Viaud-Delmon, David Alloza","doi":"10.1145/1230100.1230133","DOIUrl":"https://doi.org/10.1145/1230100.1230133","url":null,"abstract":"Despite recent advances, including sound source clustering and perceptual auditory masking, high quality rendering of complex virtual scenes with thousands of sound sources remains a challenge. Two major bottlenecks appear as the scene complexity increases: the cost of clustering itself, and the cost of pre-mixing source signals within each cluster. In this paper, we first propose an improved hierarchical clustering algorithm that remains efficient for large numbers of sources and clusters while providing progressive refinement capabilities. We then present a lossy pre-mixing method based on a progressive representation of the input audio signals and the perceptual importance of each sound source. Our quality evaluation user tests indicate that the recently introduced audio saliency map is inappropriate for this task. Consequently we propose a \"pinnacle\", loudness-based metric, which gives the best results for a variety of target computing budgets. We also performed a perceptual pilot study which indicates that in audio-visual environments, it is better to allocate more clusters to visible sound sources. We propose a new clustering metric using this result. As a result of these three solutions, our system can provide high quality rendering of thousands of 3D-sound sources on a \"gamer-style\" PC.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125699656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Skinning arbitrary deformations","authors":"L. Kavan, R. Mcdonnell, S. Dobbyn, J. Zára, C. O'Sullivan","doi":"10.1145/1230100.1230109","DOIUrl":"https://doi.org/10.1145/1230100.1230109","url":null,"abstract":"Matrix palette skinning (also known as skeletal subspace deformation) is a very popular real-time animation technique. So far, it has only been applied to the class of quasi-articulated objects, such as moving human or animal figures. In this paper, we demonstrate how to automatically construct skinning approximations of arbitrary precomputed animations, such as those of cloth or elastic materials. In contrast to previous approaches, our method is particularly well suited to input animations without rigid components. Our transformation fitting algorithm finds optimal skinning transformations (in a least-squares sense) and therefore achieves considerably higher accuracy for non-quasi-articulated objects than previous methods. This allows the advantages of skinned animations (e.g., efficient rendering, rest-pose editing and fast collision detection) to be exploited for arbitrary deformations.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132476194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time mesh simplification using the GPU","authors":"Christopher DeCoro, Natalya Tatarchuk","doi":"10.1145/1230100.1230128","DOIUrl":"https://doi.org/10.1145/1230100.1230128","url":null,"abstract":"Recent advances in real-time rendering have allowed the GPU implementation of traditionally CPU-restricted algorithms, often with performance increases of an order of magnitude or greater. Such gains are achieved by leveraging the large-scale parallelism of the GPU towards applications that are well-suited for these streaming architectures. By contrast, mesh simplification has traditionally been viewed as a non-interactive process not readily amenable to GPU acceleration. We demonstrate how it becomes practical for real-time use through our method, and that the use of the GPU even for offline simplification leads to significant increases in performance. Our approach for mesh decimation adopts a vertex-clustering method to the GPU by taking advantage of a new addition to the rendering pipeline - the geometry shader stage. We present a novel general-purpose data structure designed for streaming architectures called the probabilistic octree, which allows for much of the flexibility of offline implementations, including sparse encoding and variable level-of-detail. We demonstrate successful use of this data structure in our GPU implementation of mesh simplification. We can generate adaptive levels of detail by applying non-linear warping functions to the cluster map in order to improve resulting simplification quality. Our GPU-accelerated approach enables simultaneous construction of multiple levels of detail and out-of-core simplification of extremely large polygonal meshes.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131086759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TileTrees","authors":"S. Lefebvre, C. Dachsbacher","doi":"10.1145/1230100.1230104","DOIUrl":"https://doi.org/10.1145/1230100.1230104","url":null,"abstract":"Texture mapping with atlases suffer from several drawbacks: Wasted memory, seams, uniform resolution and no support of implicit surfaces. Texture mapping in a volume solves most of these issues, but unfortunately it induces an important space and time overhead. To address this problem, we introduce the TileTree: A novel data structure for texture mapping surfaces. TileTrees store square texture tiles into the leaves of an octree surrounding the surface. At rendering time the surface is projected onto the tiles, and the color is retrieved by a simple 2D texture fetch into a tile map. This avoids the difficulties of global planar parameterizations while still mapping large pieces of surface to regular 2D textures. Our method is simple to implement, does not require long pre-processing time, nor any modification of the textured geometry. It is not limited to triangle meshes. The resulting texture has little distortion and is seamlessly interpolated over smooth surfaces. Our method natively supports adaptive resolution. We show that TileTrees are more compact than other volume approaches, while providing fast access to the data. We also describe an interactive painting application, enabling to create, edit and render objects without having to convert between texture representations.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128444519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time ambient occlusion for dynamic character skins","authors":"Adam G. Kirk, Okan Arikan","doi":"10.1145/1230100.1230108","DOIUrl":"https://doi.org/10.1145/1230100.1230108","url":null,"abstract":"We present a single-pass hardware accelerated method to reconstruct compressed ambient occlusion values in real-time on dynamic character skins. This method is designed to work with meshes that are deforming based on a low-dimensional set of parameters, as in character animation. The inputs to our method are rendered ambient occlusion values at the vertices of a mesh deformed into various poses, along with the corresponding degrees of freedom of those poses. The algorithm uses k-means clustering to group the degrees of freedom into a small number of pose clusters. Because the pose variation in a cluster is small, our method can define a low-dimensional pose representation using principal component analysis. Within each cluster, we approximate ambient occlusion as a linear function in the reduced-dimensional representation. When drawing the character, our method uses moving least squares to blend the reconstructed ambient occlusion values from a small number of pose clusters. This technique offers significant memory savings over storing uncompressed values, and can generate plausible ambient occlusion values for poses not seen in training. Because we are using linear functions our output is smooth, fast to evaluate, and easy to implement in a vertex or fragment shader.","PeriodicalId":140639,"journal":{"name":"Proceedings of the 2007 symposium on Interactive 3D graphics and games","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128719814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}