{"title":"Flow-complex-based shape reconstruction from 3D curves","authors":"Bardia Sadri, Karan Singh","doi":"10.1145/2560328","DOIUrl":"https://doi.org/10.1145/2560328","url":null,"abstract":"We address the problem of shape reconstruction from a sparse unorganized collection of 3D curves, typically generated by increasingly popular 3D curve sketching applications. Experimentally, we observe that human understanding of shape from connected 3D curves is largely consistent, and informed by both topological connectivity and geometry of the curves. We thus employ the flow complex, a structure that captures aspects of input topology and geometry, in a novel algorithm to produce an intersection-free 3D triangulated shape that interpolates the input 3D curves. Our approach is able to triangulate highly nonplanar and concave curve cycles, providing a robust 3D mesh and parametric embedding for challenging 3D curve input. Our evaluation is fourfold: we show our algorithm to match designer-selected curve cycles for surfacing; we produce user-acceptable shapes for a wide range of curve inputs; we show our approach to be predictable and robust to curve addition and deletion; we compare our results to prior art.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"32 1","pages":"20:1-20:15"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85010702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ink-and-ray: Bas-relief meshes for adding global illumination effects to hand-drawn characters","authors":"D. Sýkora, L. Kavan, Martin Čadík, Ondrej Jamriska, Alec Jacobson, B. Whited, Maryann Simmons, O. Sorkine-Hornung","doi":"10.1145/2591011","DOIUrl":"https://doi.org/10.1145/2591011","url":null,"abstract":"We present a new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations. Our system exploits the concept of bas-relief sculptures, making it possible to generate 3D proxies suitable for rendering without requiring side-views or extensive user input. We formulate an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape. The resulting renders provide the richer stylization capabilities of 3D global illumination while still retaining the 2D hand-drawn look-and-feel. We demonstrate our approach on a varied set of hand-drawn images and animations, showing that even in comparison to ground-truth renderings of full 3D objects, our bas-relief approximation is able to produce convincing global illumination effects, including self-shadowing, glossy reflections, and diffuse color bleeding.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"5 1","pages":"16:1-16:15"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88921980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Painting-to-3D model alignment via discriminative visual elements","authors":"Mathieu Aubry, Bryan C. Russell, Josef Sivic","doi":"10.1145/2591009","DOIUrl":"https://doi.org/10.1145/2591009","url":null,"abstract":"This article describes a technique that can reliably align arbitrary 2D depictions of an architectural site, including drawings, paintings, and historical photographs, with a 3D model of the site. This is a tremendously difficult task, as the appearance and scene structure in the 2D depictions can be very different from the appearance and geometry of the 3D model, for example, due to the specific rendering style, drawing error, age, lighting, or change of seasons. In addition, we face a hard search problem: the number of possible alignments of the painting to a large 3D model, such as a partial reconstruction of a city, is huge. To address these issues, we develop a new compact representation of complex 3D scenes. The 3D model of the scene is represented by a small set of discriminative visual elements that are automatically learned from rendered views. Similar to object detection, the set of visual elements, as well as the weights of individual features for each element, are learned in a discriminative fashion. We show that the learned visual elements are reliably matched in 2D depictions of the scene despite large variations in rendering style (e.g., watercolor, sketch, historical photograph) and structural changes (e.g., missing scene parts, large occluders) of the scene. We demonstrate an application of the proposed approach to automatic rephotography to find an approximate viewpoint of historical paintings and photographs with respect to a 3D model of the site. The proposed alignment procedure is validated via a human user study on a new database of paintings and sketches spanning several sites. The results demonstrate that our algorithm produces significantly better alignments than several baseline methods.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"1 1","pages":"14:1-14:14"},"PeriodicalIF":0.0,"publicationDate":"2014-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74977842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Focus 3D: Compressive accommodation display","authors":"Andrew Maimone, Gordon Wetzstein, Matthew Hirsch, Douglas Lanman, R. Raskar, H. Fuchs","doi":"10.1145/2503144","DOIUrl":"https://doi.org/10.1145/2503144","url":null,"abstract":"We present a glasses-free 3D display design with the potential to provide viewers with nearly correct accommodative depth cues, as well as motion parallax and binocular cues. Building on multilayer attenuator and directional backlight architectures, the proposed design achieves the high angular resolution needed for accommodation by placing spatial light modulators about a large lens: one conjugate to the viewer's eye, and one or more near the plane of the lens. Nonnegative tensor factorization is used to compress a high angular resolution light field into a set of masks that can be displayed on a pair of commodity LCD panels. By constraining the tensor factorization to preserve only those light rays seen by the viewer, we effectively steer narrow high-resolution viewing cones into the user's eyes, allowing binocular disparity, motion parallax, and the potential for nearly correct accommodation over a wide field of view. We verify the design experimentally by focusing a camera at different depths about a prototype display, establish formal upper bounds on the design's accommodation range and diffraction-limited performance, and discuss practical limitations that must be overcome to allow the device to be used with human observers.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"11 1","pages":"153:1-153:13"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83790792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding the role of phase function in translucent appearance","authors":"Ioannis Gkioulekas, Bei Xiao, Shuang Zhao, E. Adelson, Todd E. Zickler, K. Bala","doi":"10.1145/2516971.2516972","DOIUrl":"https://doi.org/10.1145/2516971.2516972","url":null,"abstract":"Multiple scattering contributes critically to the characteristic translucent appearance of food, liquids, skin, and crystals; but little is known about how it is perceived by human observers. This article explores the perception of translucency by studying the image effects of variations in one factor of multiple scattering: the phase function. We consider an expanded space of phase functions created by linear combinations of Henyey-Greenstein and von Mises-Fisher lobes, and we study this physical parameter space using computational data analysis and psychophysics. Our study identifies a two-dimensional embedding of the physical scattering parameters in a perceptually meaningful appearance space. Through our analysis of this space, we find uniform parameterizations of its two axes by analytical expressions of moments of the phase function, and provide an intuitive characterization of the visual effects that can be achieved at different parts of it. We show that our expansion of the space of phase functions enlarges the range of achievable translucent appearance compared to traditional single-parameter phase function models. Our findings highlight the important role phase function can have in controlling translucent appearance, and provide tools for manipulating its effect in material design applications.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"6 1","pages":"147:1-147:19"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80129713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Instant convolution shadows for volumetric detail mapping","authors":"Daniel Patel, Veronika Soltészová, J. Nordbotten, S. Bruckner","doi":"10.1145/2492684","DOIUrl":"https://doi.org/10.1145/2492684","url":null,"abstract":"In this article, we present a method for rendering dynamic scenes featuring translucent procedural volumetric detail with all-frequency soft shadows being cast from objects residing inside the view frustum. Our approach is based on an approximation of physically correct shadows from distant Gaussian area light sources positioned behind the view plane, using iterative convolution. We present a theoretical and empirical analysis of this model and propose an efficient class of convolution kernels which provide high quality at interactive frame rates. Our GPU-based implementation supports arbitrary volumetric detail maps, requires no precomputation, and therefore allows for real-time modification of all rendering parameters.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"62 1","pages":"154:1-154:18"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91527636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Authoring and animating painterly characters","authors":"Katie Bassett, Ilya Baran, Johannes Schmid, M. Gross, R. Sumner","doi":"10.1145/2484238","DOIUrl":"https://doi.org/10.1145/2484238","url":null,"abstract":"Artists explore the visual style of animated characters through 2D concept art, since it affords them a nearly unlimited degree of creative freedom. Realizing the desired visual style, however, within the 3D character animation pipeline is often impossible, since artists must work within the technical limitations of the pipeline toolset. In order to expand the range of possible visual styles for digital characters, our research aims to incorporate the expressiveness afforded by 2D concept painting into the computer animation pipeline as a core component of character authoring and animation. While prior 3D painting methods focus on static geometry or simple animations, we develop tools for the more difficult task of character animation. Our system shows how 3D stroke-based paintings can be deformed using standard rigging tools. We also propose a configuration-space keyframing algorithm for authoring stroke effects that depend on scene variables such as character pose or light position. During animation, our system supports stroke-based temporal keyframing for one-off effects. Our primary technical contribution is a novel interpolation scheme for configuration-space keyframing that ensures smooth, controllable results. We demonstrate several characters authored with our system that exhibit painted effects difficult to achieve with traditional animation tools.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"74 1","pages":"156:1-156:12"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72676721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-quality computational imaging through simple lenses","authors":"Felix Heide, Mushfiqur Rouf, M. Hullin, B. Labitzke, W. Heidrich, A. Kolb","doi":"10.1145/2516971.2516974","DOIUrl":"https://doi.org/10.1145/2516971.2516974","url":null,"abstract":"Modern imaging optics are highly complex systems consisting of up to two dozen individual optical elements. This complexity is required in order to compensate for the geometric and chromatic aberrations of a single lens, including geometric distortion, field curvature, wavelength-dependent blur, and color fringing. In this article, we propose a set of computational photography techniques that remove these artifacts, and thus allow for postcapture correction of images captured through uncompensated, simple optics which are lighter and significantly less expensive. Specifically, we estimate per-channel, spatially varying point spread functions, and perform nonblind deconvolution with a novel cross-channel term that is designed to specifically eliminate color fringing.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"25 1","pages":"149:1-149:14"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79819280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometry and context for semantic correspondences and functionality recognition in man-made 3D shapes","authors":"Hamid Laga, M. Mortara, M. Spagnuolo","doi":"10.1145/2516971.2516975","DOIUrl":"https://doi.org/10.1145/2516971.2516975","url":null,"abstract":"We address the problem of automatic recognition of functional parts of man-made 3D shapes in the presence of significant geometric and topological variations. We observe that under such challenging circumstances, the context of a part within a 3D shape provides important cues for learning the semantics of shapes. We propose to model the context as structural relationships between shape parts and use them, in addition to part geometry, as cues for functionality recognition. We represent a 3D shape as a graph interconnecting parts that share some spatial relationships. We model the context of a shape part as walks in the graph. Similarity between shape parts can then be defined as the similarity between their contexts, which in turn can be efficiently computed using graph kernels. This formulation enables us to: (1) find part-wise semantic correspondences between 3D shapes in a nonsupervised manner and without relying on user-specified textual tags, and (2) design classifiers that learn in a supervised manner the functionality of the shape components. We specifically show that the performance of the proposed context-aware similarity measure in finding part-wise correspondences outperforms geometry-only-based techniques and that contextual analysis is effective in dealing with shapes exhibiting large geometric and topological variations.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"218 1","pages":"150:1-150:16"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91197602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harmonic parameterization by electrostatics","authors":"He Wang, K. Sidorov, Peter Sandilands, T. Komura","doi":"10.1145/2503177","DOIUrl":"https://doi.org/10.1145/2503177","url":null,"abstract":"In this article, we introduce a method to apply ideas from electrostatics to parameterize the open space around an object. By simulating the object as a virtually charged conductor, we can define an object-centric coordinate system which we call Electric Coordinates. It parameterizes the outer space of a reference object in a way analogous to polar coordinates. We also introduce a measure that quantifies the extent to which an object is wrapped by a surface. This measure can be computed as the electric flux through the wrapping surface due to the electric field around the charged conductor. The electrostatic parameters, which comprise the Electric Coordinates and flux, have several applications in computer graphics, including: texturing, morphing, meshing, path planning relative to a target object, mesh parameterization, designing deformable objects, and computing coverage. Our method works for objects of arbitrary geometry and topology, and thus is applicable in a wide variety of scenarios.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"7 1","pages":"155:1-155:12"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85266743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}