"Stray-light compensation in dynamic projection mapping"
C. Siegl, Matteo Colaianni, M. Stamminger, F. Bauer. SIGGRAPH ASIA 2016 Technical Briefs, 2016. DOI: https://doi.org/10.1145/3005358.3005364

Abstract: Projection-based mixed reality is an effective tool for creating immersive visualizations on real-world objects, with applications ranging from art installations and education to stage shows and advertising. In this work, we enhance a multi-projector system for dynamic projection mapping by handling several physical stray-light effects: interreflection, projector black level, and environment light, in real time for dynamic scenes. We show how all of these effects can be efficiently simulated and accounted for at run time, resulting in significantly improved projection mapping results.
{"title":"Blending texture features from multiple reference images for style transfer","authors":"Hikaru Ikuta, Keisuke Ogaki, Yuri Odagiri","doi":"10.1145/3005358.3005388","DOIUrl":"https://doi.org/10.1145/3005358.3005388","url":null,"abstract":"We present an algorithm that learns a desired style of artwork from a collection of images and transfers this style to an arbitrary image. Our method is based on the observation that the style of artwork is not characterized by the features of one work, but rather by the features that commonly appear within a collection of works. To learn such a representation of style, a sufficiently large dataset of images created in the same style is necessary. We present a novel illustration dataset that contains 500,000 images mainly consisting of digital paintings, annotated with rich information such as tags, comments, etc. We utilize a feature space constructed from statistical properties of CNN feature responses, and represent the style as a closed region within the feature space. We present experimental results that show the closed region is capable of synthesizing an appropriate texture that belongs to the desired style, and is capable of transferring the synthesized texture to a given input image.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125401533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

"VPET: a toolset for collaborative virtual filmmaking"
S. Spielmann, Andreas Schuster, Kai Götz, V. Helzle. SIGGRAPH ASIA 2016 Technical Briefs, 2016. DOI: https://doi.org/10.1145/3005358.3005370

Abstract: Over the last decades, the process of filmmaking has been subject to constant virtualization. Empty green-screen stages leave the entire on-set crew clueless, as real props are often replaced with virtual elements in later stages of production. With the development of virtual production workflows, solutions have been introduced that enable decision-makers to explore the virtually augmented reality. However, current environments are either proprietary or lack usability, particularly when used by filmmakers without specialized knowledge of computer graphics and 3D software. As part of the EU-funded project Dreamspace, we have developed VPET (Virtual Production Editing Tool), a holistic approach for established film pipelines that allows on-set light, asset, and animation editing via an intuitive interface. VPET is a tablet-based on-set editing application that works within a real-time virtual production environment. It is designed to run on mobile and head-mounted devices (HMDs), and communicates through a network interface with Digital Content Creation (DCC) tools and other VPET clients. The tool also provides functionality to interact with digital assets during a film production and synchronises changes within the film pipeline. This work represents a novel approach to interacting collaboratively with film assets in real time while maintaining fundamental parts of existing production pipelines. Our vision is to establish an on-set situation comparable to the early days of filmmaking, when all creative decisions were made directly on set. Additionally, this will contribute to the democratisation of virtual production.

"Deep patch-wise colorization model for grayscale images"
X. Liang, Zhuo Su, Yiqi Xiao, Jiaming Guo, Xiaonan Luo. SIGGRAPH ASIA 2016 Technical Briefs, 2016. DOI: https://doi.org/10.1145/3005358.3005375

Abstract: To handle the colorization problem, we propose a deep patch-wise colorization model for grayscale images. Unlike constructive color-mapping models that rely on complicated mathematical priors, we alternately apply two loss functions in the deep model to suppress the training errors of the convolutional neural network. To address potential boundary artifacts, we present a refinement scheme inspired by guided filtering. In the experiment section, we summarize our network parameter settings in practice, including the patch size, the number of layers, and the convolution kernels. Our experiments demonstrate that this model outputs more satisfactory visual colorizations than state-of-the-art methods. Moreover, we show that our method has a wide range of applications and can be applied to stylistic colorization.

"Model-driven sketch reconstruction with structure-oriented retrieval"
Lei Li, Zhe Huang, C. Zou, Chiew-Lan Tai, Rynson W. H. Lau, Hao Zhang, P. Tan, Hongbo Fu. SIGGRAPH ASIA 2016 Technical Briefs, 2016. DOI: https://doi.org/10.1145/3005358.3005372

Abstract: We propose an interactive system that lifts a 2D sketch into a 3D sketch with the help of existing models in shape collections. The key idea is to exploit part structure for shape retrieval and sketch reconstruction. We adopt sketch-based shape retrieval and develop a novel matching algorithm that considers structure in addition to traditional shape features. From a list of retrieved models, users select one to serve as a 3D proxy, providing abstract 3D information. Our reconstruction method then transforms the sketch into 3D geometry by back-projection, followed by an optimization procedure based on the Laplacian mesh deformation framework. Preliminary evaluations show that our retrieval algorithm is more effective than a state-of-the-art method and that users can create interesting 3D forms of sketches without precise drawing skills.
{"title":"Display tracking using blended images with unknown mixing ratio as a template","authors":"Akifumi Goto, S. Kagami, K. Hashimoto","doi":"10.1145/3005358.3005381","DOIUrl":"https://doi.org/10.1145/3005358.3005381","url":null,"abstract":"This paper describes a display tracking method employing blended multiple images with unknown mixing ratio as a template, which estimates the geometrical transformation and mixing ratio simultaneously. We propose a fast computational algorithm for the above problem that enables high-frame-rate visual tracking. We demonstrate an application to fast tracking projection of a grayscale image by a high-speed DLP (Digital Light Processing) projector, in which the image is composed of multiple bit planes, and an application to tracking of a movie displayed in a liquid crystal display panel, in which the movie is composed of multiple grayscale images.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127333372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

"Joint depth map interpolation and segmentation with planar surface model"
Shibiao Xu, Longquan Dai, Jiguang Zhang, Jinhui Tang, G. H. Kumar, Yanning Zhang, Xiaopeng Zhang. SIGGRAPH ASIA 2016 Technical Briefs, 2016. DOI: https://doi.org/10.1145/3005358.3005365

Abstract: Depth map interpolation and segmentation have long been studied in computer vision, yet they are usually treated as two independent problems. In fact, the two problems are complementary: the results of one can substantially improve the results of the other. Assuming that the depth map consists of planar surfaces, we propose a unified variational formulation for joint depth map interpolation and segmentation. Specifically, our model uses a multi-label representation of the depth map, where each label corresponds to a parametric representation of the planar surface on a segment. Using an alternating direction method, we find the minimizing solution. Experiments show that our algorithm outperforms other methods.
{"title":"Automatic generation of large-scale handwriting fonts via style learning","authors":"Z. Lian, Bo Zhao, Jianguo Xiao","doi":"10.1145/3005358.3005371","DOIUrl":"https://doi.org/10.1145/3005358.3005371","url":null,"abstract":"Generating personal handwriting fonts with large amounts of characters is a boring and time-consuming task. Take Chinese fonts as an example, the official standard GB18030-2000 for commercial font products contains 27533 simplified Chinese characters. Consistently and correctly writing out such huge amounts of characters is usually an impossible mission for ordinary people. To solve this problem, we propose a handy system to automatically synthesize personal handwritings for all characters (e.g., Chinese) in the font library by learning style from a small number (as few as 1%) of carefully-selected samples written by an ordinary person. Experiments including Turing tests with 69 participants demonstrate that the proposed system generates high-quality synthesis results which are indistinguishable from original handwritings. Using our system, for the first time the practical handwriting font library in a user's personal style with arbitrarily large numbers of Chinese characters can be generated automatically.","PeriodicalId":242138,"journal":{"name":"SIGGRAPH ASIA 2016 Technical Briefs","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114681140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

"Horizon measures: a novel view-independent shape descriptor"
E. Zhang, Vivek Jadye, C. Escher, Peter Wonka, Yue Zhang, Xiaofei Gao. SIGGRAPH ASIA 2016 Technical Briefs, 2016. DOI: https://doi.org/10.1145/3005358.3005390

Abstract: In this paper we seek to answer the following question: where do contour lines and visible contour lines (silhouettes) tend to occur on a 3D surface? Our study leads to two novel shape descriptors, the horizon measure and the visible horizon measure, which we apply to the visualization of 3D shapes, including archeological artifacts. In addition to introducing the shape descriptors, we provide a closed-form formula for the horizon measure based on classical spherical geometry. To compute the visible horizon measure, which depends on the exact computation of the surface visibility function, we instead provide an image-based approach that can process a model of high complexity within a few minutes.

"System for matching paintings with music based on emotions"
Taemin Lee, Hyunki Lim, Dae-Won Kim, Sunkyu Hwang, K. Yoon. SIGGRAPH ASIA 2016 Technical Briefs, 2016. DOI: https://doi.org/10.1145/3005358.3005366

Abstract: People experience various emotions when they interact with artistic content such as music and paintings, and painters and composers use features of their media to influence people emotionally. When people view paintings, listening to music that evokes a mood similar to that portrayed by the paintings can help them understand the painter's intention. In this work, we extract emotions from music and paintings based on their features. From these extracted emotions, the proposed system suggests the most appropriate music to accompany a given painting, and vice versa. In addition, based on our algorithm, we developed a mobile application that helps people enjoy music and paintings emotionally.