SIGGRAPH ASIA 2016 Technical Briefs: Latest Publications

Stray-light compensation in dynamic projection mapping
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005364
Authors: C. Siegl, Matteo Colaianni, M. Stamminger, F. Bauer
Abstract: Projection-based mixed reality is an effective tool for creating immersive visualizations on real-world objects. It is used in a wide range of applications such as art installations, education, stage shows, and advertising. In this work, we enhance a multi-projector system for dynamic projection mapping by handling various physical stray-light effects: interreflection, projector black level, and environment light, in real time for dynamic scenes. We show how all these effects can be efficiently simulated and accounted for at run time, resulting in significantly improved projection mapping results.
Citations: 3
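To make the compensation idea above concrete, here is a minimal sketch in Python, not the authors' implementation: it subtracts simulated stray-light terms (black level, environment light, interreflection) from the target image and inverts an assumed linear projector response. The function name, the linear-response model, and the constant stray estimates are all illustrative assumptions.

```python
import numpy as np

def compensate_stray_light(target, black_level, environment, interreflection, gain=1.0):
    """Subtract simulated stray-light terms from the target image and
    invert an (assumed) linear projector response. All images are float
    arrays in linear radiance with the same shape. Values below the
    black-level floor cannot be reproduced and are simply clipped here;
    the paper handles such residuals more carefully."""
    stray = black_level + environment + interreflection
    compensated = (target - stray) / gain
    return np.clip(compensated, 0.0, 1.0)

# Toy usage: a mid-gray target polluted by a constant stray floor.
h, w = 4, 4
target = np.full((h, w), 0.5)
black = np.full((h, w), 0.05)   # projector black-level floor
env = np.full((h, w), 0.02)     # ambient light estimate
inter = np.zeros((h, w))        # first-bounce interreflection estimate
print(compensate_stray_light(target, black, env, inter))
```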
Blending texture features from multiple reference images for style transfer
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005388
Authors: Hikaru Ikuta, Keisuke Ogaki, Yuri Odagiri
Abstract: We present an algorithm that learns a desired style of artwork from a collection of images and transfers this style to an arbitrary image. Our method is based on the observation that the style of artwork is not characterized by the features of one work, but rather by the features that commonly appear within a collection of works. To learn such a representation of style, a sufficiently large dataset of images created in the same style is necessary. We present a novel illustration dataset that contains 500,000 images, mainly consisting of digital paintings, annotated with rich information such as tags and comments. We utilize a feature space constructed from statistical properties of CNN feature responses, and represent the style as a closed region within the feature space. We present experimental results that show the closed region is capable of synthesizing an appropriate texture that belongs to the desired style, and is capable of transferring the synthesized texture to a given input image.
Citations: 9
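As an illustration of "style as CNN feature statistics", the sketch below averages Gram matrices of feature maps across several reference images. Gram matrices are a common choice for such statistics, but they are an assumption here: the paper represents style as a closed region in feature space rather than a single blended mean, so this shows only the familiar building block.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a CNN feature map of shape (C, H, W):
    channel-wise inner products, normalized by spatial size."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def blended_style_target(feature_maps, weights=None):
    """Weighted average of Gram matrices from multiple reference images:
    the simplest stand-in for a style shared by a collection of works."""
    grams = np.stack([gram_matrix(f) for f in feature_maps])
    if weights is None:
        weights = np.full(len(feature_maps), 1.0 / len(feature_maps))
    return np.tensordot(weights, grams, axes=1)

# Toy usage with random "feature maps" from three reference images.
rng = np.random.default_rng(0)
maps = [rng.standard_normal((64, 16, 16)) for _ in range(3)]
print(blended_style_target(maps).shape)  # (64, 64)
```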
VPET: a toolset for collaborative virtual filmmaking
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005370
Authors: S. Spielmann, Andreas Schuster, Kai Götz, V. Helzle
Abstract: Over the last decades, the process of filmmaking has been subject to constant virtualization. Empty green-screen stages leave the entire on-set crew clueless, as real props are often replaced with virtual elements in later stages of production. With the development of virtual production workflows, solutions that enable decision-makers to explore the virtually augmented reality have been introduced. However, current environments are either proprietary or lack usability, particularly when used by filmmakers without specialized knowledge of computer graphics and 3D software. As part of the EU-funded project Dreamspace, we have developed VPET (Virtual Production Editing Tool), a holistic approach for established film pipelines that allows on-set light, asset, and animation editing via an intuitive interface. VPET is a tablet-based on-set editing application that works within a real-time virtual production environment. It is designed to run on mobile and head-mounted devices (HMDs), and communicates through a network interface with Digital Content Creation (DCC) tools and other VPET clients. The tool also provides functionality to interact with digital assets during a film production and synchronises changes within the film pipeline. This work represents a novel approach to interacting collaboratively with film assets in real time while maintaining fundamental parts of production pipelines. Our vision is to establish an on-set situation comparable to the early days of filmmaking, when all creative decisions were made directly on set. Additionally, this will contribute to the democratisation of virtual production.
Citations: 10
Deep patch-wise colorization model for grayscale images
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005375
Authors: X. Liang, Zhuo Su, Yiqi Xiao, Jiaming Guo, Xiaonan Luo
Abstract: To handle the colorization problem, we propose a deep patch-wise colorization model for grayscale images. In contrast to constructive color-mapping models with complicated mathematical priors, we alternately apply two loss metric functions in the deep model to suppress the training errors under the convolutional neural network. To address potential boundary artifacts, a refinement scheme inspired by guided filtering is presented. In the experiment section, we summarize our network parameter settings in practice, including the patch size, number of layers, and convolution kernels. Our experiments demonstrate that this model outputs more satisfactory visual colorizations compared with state-of-the-art methods. Moreover, we show that our method has extensive application domains and can be applied to stylistic colorization.
Citations: 16
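The abstract names a refinement scheme inspired by guided filtering; below is a minimal single-channel guided filter (He et al.) in Python that refines a predicted chrominance channel against the grayscale guide. The paper's exact refinement scheme may differ; this only illustrates the edge-preserving smoothing that motivates it.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Minimal guided filter: smooths `src` while preserving the edges
    of `guide`. Per-window linear model src ~ a*guide + b, solved in
    closed form with box filters."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# Toy usage: refine a noisy chrominance channel against the gray guide.
rng = np.random.default_rng(1)
gray = np.tile(np.linspace(0, 1, 64), (64, 1))        # guide image
chroma = gray + 0.1 * rng.standard_normal((64, 64))   # noisy prediction
print(guided_filter(gray, chroma).shape)
```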
Model-driven sketch reconstruction with structure-oriented retrieval
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005372
Authors: Lei Li, Zhe Huang, C. Zou, Chiew-Lan Tai, Rynson W. H. Lau, Hao Zhang, P. Tan, Hongbo Fu
Abstract: We propose an interactive system that aims at lifting a 2D sketch into a 3D sketch with the help of existing models in shape collections. The key idea is to exploit part structure for shape retrieval and sketch reconstruction. We adopt sketch-based shape retrieval and develop a novel matching algorithm which considers structure in addition to traditional shape features. From a list of retrieved models, users select one to serve as a 3D proxy, providing abstract 3D information. Then our reconstruction method transforms the sketch into 3D geometry by back-projection, followed by an optimization procedure based on the Laplacian mesh deformation framework. Preliminary evaluations show that our retrieval algorithm is more effective than a state-of-the-art method and users can create interesting 3D forms of sketches without precise drawing skills.
Citations: 10
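The Laplacian mesh deformation framework mentioned above is a standard least-squares setup: preserve each vertex's differential coordinates while satisfying soft positional constraints. The sketch below is a generic instance of that framework with uniform weights, not the paper's specific optimization; the handle weighting and edge list are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def laplacian_deform(verts, edges, handles, handle_weight=10.0):
    """Least-squares Laplacian deformation with uniform weights.
    `handles` maps vertex index to target position (soft constraints).
    Minimizes ||L x - delta||^2 + w^2 ||C x - c||^2 per coordinate axis."""
    n = len(verts)
    deg = np.zeros(n)
    rows, cols, vals = [], [], []
    for i, j in edges:
        deg[i] += 1; deg[j] += 1
        rows += [i, j]; cols += [j, i]; vals += [-1.0, -1.0]
    rows += list(range(n)); cols += list(range(n)); vals += list(deg)
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    delta = L @ verts                          # differential coordinates
    C = sp.csr_matrix((np.ones(len(handles)),
                       (list(range(len(handles))), list(handles))),
                      shape=(len(handles), n))
    A = sp.vstack([L, handle_weight * C])
    targets = np.array([handles[i] for i in handles])
    out = np.zeros_like(verts)
    for d in range(verts.shape[1]):            # solve each axis separately
        b = np.concatenate([delta[:, d], handle_weight * targets[:, d]])
        out[:, d] = lsqr(A, b)[0]
    return out

# Toy usage: drag one end of a 5-vertex chain while fixing the other.
verts = np.array([[i, 0.0] for i in range(5)])
edges = [(i, i + 1) for i in range(4)]
print(laplacian_deform(verts, edges, {0: (0.0, 0.0), 4: (4.0, 1.0)}))
```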
Display tracking using blended images with unknown mixing ratio as a template
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005381
Authors: Akifumi Goto, S. Kagami, K. Hashimoto
Abstract: This paper describes a display tracking method that employs a blend of multiple images with unknown mixing ratio as a template, and estimates the geometrical transformation and mixing ratio simultaneously. We propose a fast computational algorithm for this problem that enables high-frame-rate visual tracking. We demonstrate an application to fast tracking projection of a grayscale image by a high-speed DLP (Digital Light Processing) projector, in which the image is composed of multiple bit planes, and an application to tracking of a movie displayed on a liquid-crystal display panel, in which the movie is composed of multiple grayscale images.
Citations: 2
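To show what estimating an unknown mixing ratio looks like in isolation, here is a closed-form least-squares sketch for the two-template case, assuming the images are already geometrically aligned. The paper estimates the warp and the ratio jointly at high frame rates; this hypothetical helper isolates only the ratio step.

```python
import numpy as np

def estimate_mixing_ratio(observed, t1, t2):
    """Least-squares alpha for observed ~= alpha*t1 + (1-alpha)*t2.
    With d = t1 - t2 and r = observed - t2, the model is r = alpha*d,
    so alpha = (d . r) / (d . d)."""
    d = (t1 - t2).ravel()
    r = (observed - t2).ravel()
    return float(d @ r / (d @ d))

# Toy usage: recover a 60/40 blend of two random templates.
rng = np.random.default_rng(2)
t1, t2 = rng.random((2, 32, 32))
obs = 0.6 * t1 + 0.4 * t2
print(estimate_mixing_ratio(obs, t1, t2))  # ~0.6
```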
Joint depth map interpolation and segmentation with planar surface model
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005365
Authors: Shibiao Xu, Longquan Dai, Jiguang Zhang, Jinhui Tang, G. H. Kumar, Yanning Zhang, Xiaopeng Zhang
Abstract: Depth map interpolation and segmentation are long-standing problems in computer vision, yet they are commonly treated as two independent problems. Indeed, the two problems are complementary: the results of one can aid in improving the results of the other in powerful ways. Assuming that the depth map consists of planar surfaces, we propose a unified variational formulation for joint depth map interpolation and segmentation. Specifically, our model uses a multi-label representation of the depth map, where each label corresponds to a parametric representation of the planar surface on a segment. Using the alternating direction method, we are able to find the minimal solution. Experiments show our algorithm outperforms other methods.
Citations: 0
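The per-segment planar model in the abstract amounts to fitting d = a*x + b*y + c to the depth samples of each segment. The sketch below shows that single step under stated assumptions (one segment, known membership); the actual method alternates such fits with a multi-label segmentation update inside a variational solver.

```python
import numpy as np

def fit_plane_and_fill(xs, ys, depths, query_xs, query_ys):
    """Least-squares fit of the planar depth model d = a*x + b*y + c to
    sparse samples of one segment, then interpolation of depth at query
    pixels belonging to that segment."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, depths, rcond=None)
    return a * query_xs + b * query_ys + c

# Toy usage: recover a known plane from noisy samples.
rng = np.random.default_rng(3)
xs, ys = rng.random((2, 100))
depths = 2.0 * xs - 1.0 * ys + 0.5 + 0.01 * rng.standard_normal(100)
print(fit_plane_and_fill(xs, ys, depths, np.array([0.5]), np.array([0.5])))
# ~[1.0], since 2*0.5 - 1*0.5 + 0.5 = 1.0
```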
Automatic generation of large-scale handwriting fonts via style learning
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005371
Authors: Z. Lian, Bo Zhao, Jianguo Xiao
Abstract: Generating personal handwriting fonts with large numbers of characters is a tedious and time-consuming task. Taking Chinese fonts as an example, the official standard GB18030-2000 for commercial font products contains 27,533 simplified Chinese characters. Consistently and correctly writing out such a huge number of characters is usually an impossible mission for ordinary people. To solve this problem, we propose a handy system to automatically synthesize personal handwriting for all characters (e.g., Chinese) in the font library by learning style from a small number (as few as 1%) of carefully selected samples written by an ordinary person. Experiments, including Turing tests with 69 participants, demonstrate that the proposed system generates high-quality synthesis results which are indistinguishable from original handwriting. Using our system, for the first time a practical handwriting font library in a user's personal style, with arbitrarily large numbers of Chinese characters, can be generated automatically.
Citations: 66
Horizon measures: a novel view-independent shape descriptor
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005390
Authors: E. Zhang, Vivek Jadye, C. Escher, Peter Wonka, Yue Zhang, Xiaofei Gao
Abstract: In this paper we seek to answer the following question: where do contour lines and visible contour lines (silhouettes) tend to occur on a 3D surface? Our study leads to two novel shape descriptors, the horizon measure and the visible horizon measure, which we apply to the visualization of 3D shapes including archeological artifacts. In addition to introducing the shape descriptors, we also provide a closed-form formula for the horizon measure based on classical spherical geometry. To compute the visible horizon measure, which depends on the exact computation of the surface visibility function, we instead provide an image-based approach which can process a model with high complexity within a few minutes.
Citations: 0
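The underlying geometric fact is standard: a surface point lies on the contour for view direction v exactly when its normal n satisfies n . v = 0. The sketch below tallies such contour hits over many sampled views as a brute-force, per-vertex stand-in for the horizon measure; the paper's closed-form spherical-geometry formula is not reproduced here, and visibility is ignored, so this illustrates the plain horizon measure, not the visible one.

```python
import numpy as np

def contour_vertices(normals, view_dir, tol=0.05):
    """Indices of vertices (approximately) on the contour for a given
    view direction: where the unit normal is perpendicular to the view,
    i.e. |n . v| < tol."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    return np.nonzero(np.abs(normals @ view_dir) < tol)[0]

# Toy usage: random unit normals, views sampled uniformly on the sphere.
rng = np.random.default_rng(4)
normals = rng.standard_normal((1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
hits = np.zeros(len(normals))
n_views = 200
for _ in range(n_views):
    v = rng.standard_normal(3)
    hits[contour_vertices(normals, v)] += 1
# Average fraction of views in which a vertex falls near the contour.
print(hits.mean() / n_views)  # ~tol, since n.v = 0 traces a great circle
```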
System for matching paintings with music based on emotions
SIGGRAPH ASIA 2016 Technical Briefs | Pub Date: 2016-11-28 | DOI: 10.1145/3005358.3005366
Authors: Taemin Lee, Hyunki Lim, Dae-Won Kim, Sunkyu Hwang, K. Yoon
Abstract: People experience various emotions when they interact with artistic content such as music and visual art in the form of paintings. Painters and composers therefore use features of music and paintings to influence people emotionally. An analysis of the methods employed to create such emotionally influential features indicated that people apparently do not find it difficult to understand artistic content. When people view paintings, listening to music that creates a mood similar to that portrayed by the paintings could help them understand the painter's intention. In this work, we extract the emotions from music and paintings based on their features. Using these extracted emotions, the proposed system suggests the most appropriate music to accompany a given image, and vice versa. In addition, based on our algorithm, we developed a mobile application that could assist people to enjoy music and paintings emotionally.
Citations: 15
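Once emotions are extracted for both media, the matching step can be as simple as a nearest-neighbor search in a shared emotion space. The sketch below assumes a two-dimensional valence-arousal representation, which is purely an illustrative assumption; the abstract does not specify the paper's emotion features or matching rule.

```python
import numpy as np

def best_match(query_emotion, candidate_emotions):
    """Index of the candidate whose emotion vector is closest to the
    query's, by Euclidean distance in the (assumed) emotion space."""
    d = np.linalg.norm(candidate_emotions - query_emotion, axis=1)
    return int(np.argmin(d))

# Hypothetical (valence, arousal) vectors for three music tracks.
music = np.array([[0.8, 0.7],    # bright, energetic
                  [-0.6, -0.3],  # somber, calm
                  [0.1, 0.9]])   # tense, agitated
painting = np.array([-0.5, -0.2])  # a muted, melancholic landscape
print(best_match(painting, music))  # -> 1, the somber track
```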