Retrieval on parametric shape collections
Adriana Schulz, Ariel Shamir, Ilya Baran, D. Levin, Pitchaya Sitthi-amorn, W. Matusik
ACM Trans. Graph., published 2017-02-13. DOI: 10.1145/3072959.3126792
Abstract: While collections of parametric shapes are growing in size and use, little progress has been made on the fundamental problem of shape-based matching and retrieval for parametric shapes in a collection. The search space for such collections is both discrete (number of shapes) and continuous (parameter values). In this work, we propose representing this space using descriptors that have been shown to be effective for single-shape retrieval. While single shapes can be represented as points in a descriptor space, parametric shapes are mapped into larger continuous regions. For smooth descriptors, we can assume that these regions are bounded low-dimensional manifolds whose dimensionality is given by the number of shape parameters. We propose representing these manifolds with a set of primitives, namely points and bounded tangent spaces. Our algorithm describes how to define these primitives and how to use them to construct a manifold approximation that allows accurate and fast retrieval. We perform an analysis based on curvature, boundary evaluation, and the allowed approximation error to select between primitive types. We show how to compute decision variables with no need for empirical parameter adjustments and discuss theoretical guarantees on retrieval accuracy. We validate our approach with experiments that use different types of descriptors on a collection of shapes from multiple categories.

A compressed representation for ray tracing parametric surfaces
Kai Selgrad, Alexander Lier, Magdalena Martinek, Christoph Buchenau, M. Guthe, Franziska Kranz, Henry Schäfer, M. Stamminger
ACM Trans. Graph., published 2017-02-13. DOI: 10.1145/3072959.3126820
Abstract: Parametric surfaces are an essential modeling tool in computer-aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on the fly or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this article, we present a novel solution to this problem. We propose a compression scheme for a priori bounding volume hierarchies (BVHs) on parametric patches that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression adds cost during traversal, we can handle very complex scenes at competitive render times, even on the memory-restricted GPU.

{"title":"Interactive sound propagation and rendering for large multi-source scenes","authors":"Carl Schissler, Dinesh Manocha","doi":"10.1145/3072959.3126830","DOIUrl":"https://doi.org/10.1145/3072959.3126830","url":null,"abstract":"We present an approach to generate plausible acoustic effects at interactive rates in large dynamic environments containing many sound sources. Our formulation combines listener-based backward ray tracing with sound source clustering and hybrid audio rendering to handle complex scenes. We present a new algorithm for dynamic late reverberation that performs high-order ray tracing from the listener against spherical sound sources. We achieve sublinear scaling with the number of sources by clustering distant sound sources and taking relative visibility into account. We also describe a hybrid convolution-based audio rendering technique that can process hundreds of thousands of sound paths at interactive rates. We demonstrate the performance on many indoor and outdoor scenes with up to 200 sound sources. In practice, our algorithm can compute more than 50 reflection orders at interactive rates on a multicore PC, and we observe a 5x speedup over prior geometric sound propagation algorithms.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"108 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80844159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Momentum-mapped inverted pendulum models for controlling dynamic human motions","authors":"Tae-Joung Kwon, J. Hodgins","doi":"10.1145/3072959.3126851","DOIUrl":"https://doi.org/10.1145/3072959.3126851","url":null,"abstract":"Designing a unified framework for simulating a broad variety of human behaviors has proven to be challenging. In this article, we present an approach for control system design that can generate animations of a diverse set of behaviors including walking, running, and a variety of gymnastic behaviors. We achieve this generalization with a balancing strategy that relies on a new form of inverted pendulum model (IPM), which we call the momentum-mapped IPM (MMIPM). We analyze reference motion capture data in a pre-processing step to extract the motion of the MMIPM. To compute a new motion, the controller plans a desired motion, frame by frame, based on the current pendulum state and a predicted pendulum trajectory. By tracking this time-varying trajectory, the controller creates a character that dynamically balances, changes speed, makes turns, jumps, and performs gymnastic maneuvers.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"155 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86297766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to schedule control fragments for physics-based characters using deep Q-learning","authors":"Libin Liu, J. Hodgins","doi":"10.1145/3072959.3126784","DOIUrl":"https://doi.org/10.1145/3072959.3126784","url":null,"abstract":"","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74830774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time geometry, albedo and motion reconstruction using a single RGBD camera
Kaiwen Guo, F. Xu, Tao Yu, Xiaoyang Liu, Qionghai Dai, Yebin Liu
ACM Trans. Graph., published 2017-01-01. DOI: 10.1145/3072959.3126786
Abstract: This paper proposes a real-time method that uses a single-view RGBD input to simultaneously reconstruct a casual scene with a detailed geometry model, surface albedo, per-frame non-rigid motion, and per-frame low-frequency lighting, without requiring any template or motion priors. The key observation is that accurate scene motion can be used to integrate temporal information to recover the precise appearance, whereas the intrinsic appearance can help to establish true correspondence in the temporal domain to recover motion. Based on this observation, we first propose a shading-based scheme to leverage appearance information for motion estimation. Then, using the reconstructed motion, a volumetric albedo fusing scheme is proposed to complete and refine the intrinsic appearance of the scene by incorporating information from multiple frames. Since the two schemes are iteratively applied during recording, the reconstructed appearance and motion become increasingly more accurate. In addition to the reconstruction results, our experiments also show that additional applications can be achieved, such as relighting, albedo editing, and free-viewpoint rendering of a dynamic scene, since geometry, appearance, and motion are all reconstructed by our technique.

Jump: virtual reality video
Robert Anderson, D. Gallup, J. Barron, Janne Kontkanen, Noah Snavely, Carlos Hernández, Sameer Agarwal, S. Seitz
ACM Trans. Graph., pp. 198:1-198:13, published 2016-11-11. DOI: 10.1145/2980179.2980257
Abstract: We present Jump, a practical system for capturing high resolution, omnidirectional stereo (ODS) video suitable for wide scale consumption in currently available virtual reality (VR) headsets. Our system consists of a video camera built using off-the-shelf components and a fully automatic stitching pipeline capable of capturing video content in the ODS format. We have discovered and analyzed the distortions inherent to ODS when used for VR display as well as those introduced by our capture method and show that they are small enough to make this approach suitable for capturing a wide variety of scenes. Our stitching algorithm produces robust results by reducing the problem to one of pairwise image interpolation followed by compositing. We introduce novel optical flow and compositing methods designed specifically for this task. Our algorithm is temporally coherent and efficient, is currently running at scale on a distributed computing platform, and is capable of processing hours of footage each day.

Perform: perceptual approach for adding OCEAN personality to human motion using laban movement analysis
Funda Durupinar, M. Kapadia, Susan Deutsch, Michael Neff, N. Badler
ACM Trans. Graph., published 2016-10-01. DOI: 10.1145/3072959.3126789
Abstract: A major goal of research on virtual humans is the animation of expressive characters that display distinct psychological attributes. Body motion is an effective way of portraying different personalities and differentiating characters. The purpose and contribution of this work is to describe a formal, broadly applicable, procedural, and empirically grounded association between personality and body motion, and to apply this association to modify a given virtual human body animation that can be represented by these formal concepts. Because the body movement of virtual characters may involve different choices of parameter sets depending on the context, situation, or application, formulating a link from personality to body motion requires an intermediate step to assist generalization. For this intermediate step, we refer to Laban Movement Analysis, which is a movement analysis technique for systematically describing and evaluating human motion. We have developed an expressive human motion generation system with the help of movement experts and conducted a user study to explore how the psychologically validated OCEAN personality factors were perceived in motions with various Laban parameters. We have then applied our findings to procedurally animate expressive characters with personality, and validated the generalizability of our approach across different models and animations via another perception study.

{"title":"Nonuniform spatial deformation of light fields by locally linear transformations","authors":"C. Birklbauer, D. Schedl, O. Bimber","doi":"10.1145/3072959.3126846","DOIUrl":"https://doi.org/10.1145/3072959.3126846","url":null,"abstract":"Light-field cameras offer new imaging possibilities compared to conventional digital cameras. However, the additional angular domain of light fields prohibits direct application of frequently used image processing algorithms, such as warping, retargeting, or stitching. We present a general and efficient framework for nonuniform light-field warping, which forms the basis for extending many of these image processing techniques to light fields. It propagates arbitrary spatial deformations defined in one light-field perspective consistently to all other perspectives by means of 4D patch matching instead of relying on explicit depth reconstruction. This allows processing light-field recordings of complex scenes with non-Lambertian properties such as transparency and refraction. We show application examples of our framework in panorama light-field imaging, light-field retargeting, and artistic manipulation of light fields.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74025172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive high-quality green-screen keying via color unmixing","authors":"Yagiz Aksoy, T. Aydin, M. Pollefeys, A. Smolic","doi":"10.1145/3072959.3126799","DOIUrl":"https://doi.org/10.1145/3072959.3126799","url":null,"abstract":"Due to the widespread use of compositing in contemporary feature films, green-screen keying has become an essential part of postproduction workflows. To comply with the ever-increasing quality requirements of the industry, specialized compositing artists spend countless hours using multiple commercial software tools, while eventually having to resort to manual painting because of the many shortcomings of these tools. Due to the sheer amount of manual labor involved in the process, new green-screen keying approaches that produce better keying results with less user interaction are welcome additions to the compositing artist’s arsenal. We found that—contrary to the common belief in the research community—production-quality green-screen keying is still an unresolved problem with its unique challenges. In this article, we propose a novel green-screen keying method utilizing a new energy minimization-based color unmixing algorithm. We present comprehensive comparisons with commercial software packages and relevant methods in literature, which show that the quality of our results is superior to any other currently available green-screen keying solution. It is important to note that, using the proposed method, these high-quality results can be generated using only one-tenth of the manual editing time that a professional compositing artist requires to process the same content having all previous state-of-the-art tools at one’s disposal.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84356865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}