{"title":"Hexahedral-dominant meshing","authors":"Dmitry Sokolov, N. Ray, L. Untereiner, B. Lévy","doi":"10.1145/3072959.3126827","DOIUrl":"https://doi.org/10.1145/3072959.3126827","url":null,"abstract":"This article introduces a method that generates a hexahedral-dominant mesh from an input tetrahedral mesh. It follows a three-step pipeline similar to the one proposed by Carrier Baudoin et al.: (1) generate a frame field, (2) generate a pointset P that is mostly organized on a regular grid locally aligned with the frame field, and (3) generate the hexahedral-dominant mesh by recombining the tetrahedra obtained from the constrained Delaunay triangulation of P. For step (1), we use a state-of-the-art algorithm to generate a smooth frame field. For step (2), we introduce an extension of Periodic Global Parameterization to the volumetric case. As compared with other global parameterization methods (such as CubeCover), our method relaxes some global constraints to avoid creating degenerate elements, at the expense of introducing some singularities that are meshed using non-hexahedral elements. For step (3), we build on the formalism introduced by Meshkat and Talmor, fill in a gap in their proof, and provide a complete enumeration of all the possible recombinations, as well as an algorithm that efficiently detects all the matches in a tetrahedral mesh. The method is evaluated and compared with the state of the art on a database of examples with various mesh complexities, varying from academic examples to real industrial cases. Compared with the method of Carrier-Baudoin et al., the method results in better scores for classical quality criteria of hexahedral-dominant meshes (hexahedral proportion, scaled Jacobian, etc.). The method also shows better robustness than CubeCover and its derivatives when applied to complicated industrial models.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76822912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Why New Programming Languages for Simulation?","authors":"G. Bernstein, Fredrik Kjolstad","doi":"10.1145/2930661","DOIUrl":"https://doi.org/10.1145/2930661","url":null,"abstract":"Writing highly performant simulations requires a lot of human effort to optimize for an increasingly diverse set of hardware platforms, such as multi-core CPUs, GPUs, and distributed machines. Since these optimizations cut across both the design of geometric data structures and numerical linear algebra, code reusability and portability is frequently sacrificed for performance. We believe the key to make simulation programmers more productive at developing portable and performant code is to introduce new linguistic abstractions, as in rendering and image processing. In this perspective, we distill the core ideas from our two languages, Ebb and Simit, that are published in this journal.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"26 1","pages":"20e:1-20e:3"},"PeriodicalIF":0.0,"publicationDate":"2016-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78278139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CofiFab: coarse-to-fine fabrication of large 3D objects","authors":"Peng Song, Bailin Deng, Ziqi Wang, Zhichao Dong, Wei Li, Chi-Wing Fu, Ligang Liu","doi":"10.1145/2897824.2925876","DOIUrl":"https://doi.org/10.1145/2897824.2925876","url":null,"abstract":"This paper presents CofiFab, a coarse-to-fine 3D fabrication solu- tion, combining 3D printing and 2D laser cutting for cost-effective fabrication of large objects at lower cost and higher speed. Our key approach is to first build coarse internal base structures within the given 3D object using laser cutting, and then attach thin 3D- printed parts, as an external shell, onto the base to recover the fine surface details. CofiFab achieves this with three novel algorithmic components. First, we formulate an optimization model to compute fabricatable polyhedrons of maximized volume, as the geometry of the internal base. Second, we devise a new interlocking scheme to tightly connect the laser-cut parts into a strong internal base, by iter- atively building a network of nonorthogonal joints and interlocking parts around polyhedral corners. Lastly, we optimize the partitioning of the external object shell into 3D-printable parts, while saving support material and avoiding overhangs. Besides cost saving, these components also consider aesthetics, stability and balancing. Hence, CofiFab can efficiently produce large objects by assembly. To evalu- ate CofiFab, we fabricate objects of varying shapes and sizes, and show that CofiFab can significantly outperform previous methods.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"47 1","pages":"45:1-45:11"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79733266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid Skeletal-Surface Motion Graphs for Character Animation from 4D Performance Capture","authors":"Peng Huang, M. Tejera, J. Collomosse, A. Hilton","doi":"10.1145/2699643","DOIUrl":"https://doi.org/10.1145/2699643","url":null,"abstract":"We present a novel hybrid representation for character animation from 4D Performance Capture (4DPC) data which combines skeletal control with surface motion graphs. 4DPC data are temporally aligned 3D mesh sequence reconstructions of the dynamic surface shape and associated appearance from multiple-view video. The hybrid representation supports the production of novel surface sequences which satisfy constraints from user-specified key-frames or a target skeletal motion. Motion graph path optimisation concatenates fragments of 4DPC data to satisfy the constraints while maintaining plausible surface motion at transitions between sequences. Space-time editing of the mesh sequence using a learned part-based Laplacian surface deformation model is performed to match the target skeletal motion and transition between sequences. The approach is quantitatively evaluated for three 4DPC datasets with a variety of clothing styles. Results for key-frame animation demonstrate production of novel sequences that satisfy constraints on timing and position of less than 1% of the sequence duration and path length. Evaluation of motion-capture-driven animation over a corpus of 130 sequences shows that the synthesised motion accurately matches the target skeletal motion. The combination of skeletal control with the surface motion graph extends the range and style of motion which can be produced while maintaining the natural dynamics of shape and appearance from the captured performance.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"33 1","pages":"17:1-17:14"},"PeriodicalIF":0.0,"publicationDate":"2015-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84521240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Layered Light Field Reconstruction for Defocus Blur","authors":"K. Vaidyanathan, Jacob Munkberg, Petrik Clarberg, Marco Salvi","doi":"10.1145/2699647","DOIUrl":"https://doi.org/10.1145/2699647","url":null,"abstract":"We present a novel algorithm for reconstructing high-quality defocus blur from a sparsely sampled light field. Our algorithm builds upon recent developments in the area of sheared reconstruction filters and significantly improves reconstruction quality and performance. While previous filtering techniques can be ineffective in regions with complex occlusion, our algorithm handles such scenarios well by partitioning the input samples into depth layers. These depth layers are filtered independently and then combined together, taking into account inter-layer visibility. We also introduce a new separable formulation of sheared reconstruction filters that achieves real-time preformance on a modern GPU and is more than two orders of magnitude faster than previously published techniques.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"8 1","pages":"23:1-23:12"},"PeriodicalIF":0.0,"publicationDate":"2015-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81287635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear Volumetric Focus for Light Field Cameras","authors":"D. Dansereau, O. Pizarro, Stefan B. Williams","doi":"10.1145/2665074","DOIUrl":"https://doi.org/10.1145/2665074","url":null,"abstract":"We demonstrate that the redundant information in light field imagery allows volumetric focus, an improvement of signal quality that maintains focus over a controllable range of depths. To do this, we derive the frequency-domain region of support of the light field, finding it to be the 4D hyperfan at the intersection of a dual fan and a hypercone, and design a filter with correspondingly shaped passband. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenslet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including planar focus, fan-shaped antialiasing, and nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, through murky water and particulate matter, in real-world scenarios, and evaluated using a variety of metrics. We show that the hyperfan's performance scales with aperture count, and demonstrate the inclusion of aliased components for high-quality rendering.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"1 1","pages":"15:1-15:20"},"PeriodicalIF":0.0,"publicationDate":"2015-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82218034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthesis of Complex Image Appearance from Limited Exemplars","authors":"Olga Diamanti, Connelly Barnes, Sylvain Paris, Eli Shechtman, O. Sorkine-Hornung","doi":"10.1145/2699641","DOIUrl":"https://doi.org/10.1145/2699641","url":null,"abstract":"Editing materials in photos opens up numerous opportunities like turning an unappealing dirt ground into luscious grass and creating a comfortable wool sweater in place of a cheap t-shirt. However, such edits are challenging. Approaches such as 3D rendering and BTF rendering can represent virtually everything, but they are also data intensive and computationally expensive, which makes user interaction difficult. Leaner methods such as texture synthesis are more easily controllable by artists, but also more limited in the range of materials that they handle, for example, grass and wool are typically problematic because of their non-Lambertian reflectance and numerous self-occlusions. We propose a new approach for editing of complex materials in photographs. We extend the texture-by-numbers approach with ideas from texture interpolation. The inputs to our method are coarse user annotation maps that specify the desired output, such as the local scale of the material and the illumination direction. Our algorithm then synthesizes the output from a discrete set of annotated exemplars. A key component of our method is that it can cope with missing data, interpolating information from the available exemplars when needed. This enables production of satisfying results involving materials with complex appearance variations such as foliage, carpet, and fabric from only one or a couple of exemplar photographs.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"7 1","pages":"22:1-22:14"},"PeriodicalIF":0.0,"publicationDate":"2015-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85821092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Light Transport Framework for Lenslet Light Field Cameras","authors":"Chia-Kai Liang, R. Ramamoorthi","doi":"10.1145/2665075","DOIUrl":"https://doi.org/10.1145/2665075","url":null,"abstract":"Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited understanding of this trade-off theoretically or numerically. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution.\u0000 In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor—in particular, real pixels have nonuniform angular sensitivity, responding more to light along the optical axis rather than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows to compare them in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"19 1","pages":"16:1-16:19"},"PeriodicalIF":0.0,"publicationDate":"2015-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82528648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Material Design Using Model Reduction","authors":"Hongyi Xu, Yijing Li, Yong Chen, J. Barbič","doi":"10.1145/2699648","DOIUrl":"https://doi.org/10.1145/2699648","url":null,"abstract":"We demonstrate an interactive method to create heterogeneous continuous deformable materials on complex three-dimensional meshes. The user specifies displacements and internal elastic forces at a chosen set of mesh vertices. Our system then rapidly solves an optimization problem to compute a corresponding heterogeneous spatial distribution of material properties using the Finite Element Method (FEM) analysis. We apply our method to linear and nonlinear isotropic deformable materials. We demonstrate that solving the problem interactively in the full-dimensional space of individual tetrahedron material values is not practical. Instead, we propose a new model reduction method that projects the material space to a low-dimensional space of material modes. Our model reduction accelerates optimization by two orders of magnitude and makes the convergence much more robust, making it possible to interactively design material distributions on complex meshes. We apply our method to precise control of contact forces and control of pressure over large contact areas between rigid and deformable objects for ergonomics. Our tetrahedron-based dithering method can efficiently convert continuous material distributions into discrete ones and we demonstrate its precision via FEM simulation. We physically display our distributions using haptics, as well as demonstrate how haptics can aid in the material design. The produced heterogeneous material distributions can also be used in computer animation applications.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"308 1","pages":"18:1-18:14"},"PeriodicalIF":0.0,"publicationDate":"2015-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73230384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaze-Driven Video Re-Editing","authors":"Eakta Jain, Yaser Sheikh, Ariel Shamir, J. Hodgins","doi":"10.1145/2699644","DOIUrl":"https://doi.org/10.1145/2699644","url":null,"abstract":"Given the current profusion of devices for viewing media, video content created at one aspect ratio is often viewed on displays with different aspect ratios. Many previous solutions address this problem by retargeting or resizing the video, but a more general solution would re-edit the video for the new display. Our method employs the three primary editing operations: pan, cut, and zoom. We let viewers implicitly reveal what is important in a video by tracking their gaze as they watch the video. We present an algorithm that optimizes the path of a cropping window based on the collected eyetracking data, finds places to cut, and computes the size of the cropping window. We present results on a variety of video clips, including close-up and distant shots, and stationary and moving cameras. We conduct two experiments to evaluate our results. First, we eyetrack viewers on the result videos generated by our algorithm, and second, we perform a subjective assessment of viewer preference. These experiments show that viewer gaze patterns are similar on our result videos and on the original video clips, and that viewers prefer our results to an optimized crop-and-warp algorithm.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"99 1","pages":"21:1-21:12"},"PeriodicalIF":0.0,"publicationDate":"2015-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80563292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}