ACM SIGGRAPH 2019 Posters: Latest Publications

A data-driven compression method for transient rendering
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338582
Yun Liang, Mingqin Chen, Zesheng Huang, D. Gutierrez, A. Muñoz, Julio Marco
Abstract: Monte Carlo methods for transient rendering have become a powerful instrument for generating reliable data in transient imaging applications, whether for benchmarking, analysis, or as a source for data-driven approaches. However, due to the increased dimensionality of time-resolved renders, storage and data bandwidth are significant limiting constraints: a single time-resolved render of a scene can take several hundred megabytes. In this work we propose a learning-based approach that uses deep encoder-decoder architectures to learn lower-dimensional feature vectors of time-resolved pixels. We demonstrate that our method can compress transient renders by up to a factor of 32 and recover the full transient profile with a decoder. Additionally, we show that our learned features significantly mitigate variance in the recovered signal, addressing one of the pathological problems in transient rendering.
Citations: 2
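To make the compression ratio concrete, here is a minimal sketch of an encoder-decoder of the kind the abstract above describes, squeezing a 1024-bin transient pixel profile into a 32-dimensional code (a 32x reduction) and decoding it back. The bin count, layer sizes, and loss are illustrative assumptions on our part, not the authors' architecture.

```python
# A minimal transient-profile autoencoder sketch (assumed sizes, not the
# paper's network): 1024 time bins -> 32-dim code -> 1024 time bins.
import torch
import torch.nn as nn

class TransientAutoencoder(nn.Module):
    def __init__(self, bins=1024, code=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(bins, 256), nn.ReLU(),
            nn.Linear(256, code),      # 1024 -> 32: the 32x compression
        )
        self.decoder = nn.Sequential(
            nn.Linear(code, 256), nn.ReLU(),
            nn.Linear(256, bins),      # recover the full transient profile
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TransientAutoencoder()
profiles = torch.rand(8, 1024)         # a batch of noisy time-resolved pixels
recon = model(profiles)
loss = nn.functional.mse_loss(recon, profiles)  # plain reconstruction loss
print(loss.item())
```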
3D printing for mixed reality hands-on museum exhibit interaction
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338609
Laura Mann, O. Fryazinov
Abstract: This work combines 3D printing with mixed reality for use in museum exhibitions and cultural heritage. While priceless artefacts are currently encased in glass, kept safe and out of the visitors' reach, we present a new pipeline that allows visitors hands-on interaction with realistic 3D-printed replicas of the artefacts, which are then digitally augmented to take on the genuine artefacts' appearance.
Citations: 7
Partial zoom on small display for people suffering from presbyopia
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338581
Huiyi Fang, Kenji Funahashi, S. Mizuno, Y. Iwahori
Abstract: When presbyopic people use digital devices, they often zoom in on the display, because it is out of focus when held close to the face. We previously proposed an automatic display zoom system for presbyopic people [Fang and Funahashi 2018]. However, after zooming in, some of the information moves off the small display (Fig. 2(a) to (b)), so frequent scrolling is needed, which is a bother. A conventional partial zoom, like a magnifying glass, is also commonly provided (Fig. 2(a) to (c)), but the area around the zoomed region is cut off, and the glass must likewise be moved frequently. People sometimes want to skim through sentences and grasp an overview. Moreover, although blurry words are hard to read on their own (Fig. 3(a)), a sentence containing blurry words can be guessed and read when the surrounding words are clear (Fig. 3(b)). We therefore reconsider the zoom-in method for presbyopic people: the attended area is zoomed in so it can be read clearly, and the magnification of the surrounding area is gradually reduced down to a zoom-out rate, so that all information remains on the small display even though some words are shrunk. We expect that the unzoomed words can still be guessed and read, like blurred words around clear zoomed-in words. We propose a suitable partial zoom-in function that allows skimming a document.
Citations: 0
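The gradual-magnification idea in the abstract above lends itself to a simple formulation: full zoom-in inside the attended radius, falling off to a zoom-out rate toward the periphery so everything stays on screen. The sketch below uses a linear falloff with made-up radii and rates; the paper does not specify these values, so treat all numbers as assumptions.

```python
# Spatially varying magnification sketch: zoomed in at the focus, gradually
# reduced to a zoom-out rate farther away (all parameters are placeholders).
def magnification(dist, focus_radius=50.0, falloff_radius=200.0,
                  zoom_in=2.0, zoom_out=0.5):
    """Magnification as a function of distance (px) from the attended point."""
    if dist <= focus_radius:           # attended area: read clearly
        return zoom_in
    if dist >= falloff_radius:         # periphery: shrunk but still visible
        return zoom_out
    t = (dist - focus_radius) / (falloff_radius - focus_radius)
    return (1.0 - t) * zoom_in + t * zoom_out   # linear blend in between

for d in (0.0, 100.0, 150.0, 250.0):
    print(f"distance {d:5.1f} px -> magnification {magnification(d):.2f}x")
```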
Graph matching based anime colorization with multiple references
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338560
Akinobu Maejima, Hiroyuki Kubo, Takuya Funatomi, T. Yotsukura, Satoshi Nakamura, Y. Mukaigawa
Abstract: We propose a graph matching-based anime colorization method that colorizes line drawings using multiple reference images. A graph structure for each frame in an input line-drawing sequence helps find correspondences between the regions to be colorized across frames. However, it is difficult to find precise correspondences for whole frames from a single reference image, because the graph structure tends to change drastically over the sequence. Our method therefore first finds an optimal image among multiple reference images according to a cost function that represents shape similarity between nodes and compatibility of node pairs. While several manually colored reference images must be prepared, our method is still effective in reducing the effort required for colorization in anime production. We demonstrate the effectiveness of our method on actual images from our production.
Citations: 9
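A hedged sketch of the reference-selection step described above: score each reference's region graph against the target frame with a cost mixing per-node shape similarity and pairwise adjacency compatibility, then keep the cheapest reference. The features (normalized region area, adjacency agreement), the by-index correspondence, and the implicit unit weights are placeholders, not the paper's actual cost function or matcher.

```python
# Toy region graphs: "area" holds normalized region areas; "adj" is an
# adjacency matrix over regions. Correspondence is taken by index here for
# simplicity; the paper solves a real graph-matching problem instead.
def node_cost(a, b):
    return abs(a - b)                       # shape-similarity placeholder

def pair_cost(adj_t, adj_r, i, j):
    return 0.0 if adj_t[i][j] == adj_r[i][j] else 1.0  # compatibility term

def graph_cost(target, ref):
    n = min(len(target["area"]), len(ref["area"]))
    cost = sum(node_cost(target["area"][i], ref["area"][i]) for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            cost += pair_cost(target["adj"], ref["adj"], i, j)
    return cost

def best_reference(target, references):
    return min(range(len(references)),
               key=lambda k: graph_cost(target, references[k]))

target = {"area": [0.5, 0.3, 0.2], "adj": [[0, 1, 1], [1, 0, 0], [1, 0, 0]]}
refs = [
    {"area": [0.6, 0.2, 0.2], "adj": [[0, 1, 1], [1, 0, 0], [1, 0, 0]]},
    {"area": [0.4, 0.4, 0.2], "adj": [[0, 0, 1], [0, 0, 1], [1, 1, 0]]},
]
print("best reference:", best_reference(target, refs))  # -> 0
```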
Biodigital: transform data to experience, beyond data visualization
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338542
Takahito Ito, César A. Hidalgo
Abstract: "Biodigital" is a sci-fi interactive story set in the year 2117 that combines VR film, immersive 3D environments, and VR data visualization. It turns data into a cinematic experience in which the user is enmeshed as a character in the story. This VR storytelling tells the tale of humanity a hundred years from now, and encourages us to ask: "How should we live in the future?"
Citations: 1
Delaunay lofts: a new class of space-filling shapes
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338576
S. Subramanian, M. Eng, Vinayak R. Krishnamurthy, E. Akleman
Abstract: We have developed an approach to construct and design a new class of space-filling shapes, which we call Delaunay Lofts. Our approach is based on interpolating a stack of planar tiles whose dual tilings are Delaunay diagrams. We construct control curves that interpolate the Delaunay vertices. A Voronoi decomposition of the volume, using these control curves as Voronoi sites, gives us a lofted interpolation of the original polygons in the planar tiles. Combined with the use of wallpaper symmetries, this allows the design of space-filling shapes in 3-space. In the poster exhibition, we will also demonstrate 3D-printed examples of the new class of shapes (see Figures 1 and 3).
Citations: 0
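The lofting step above can be pictured slice by slice: at height z, each control curve acts as a Voronoi site, and the slice is partitioned by nearest-site labeling. The sketch below makes strong simplifications (two layers, straight-line control curves between corresponding Delaunay vertices, a coarse grid, no wallpaper symmetries), so it only illustrates the mechanics, not the full construction.

```python
# Slice-wise Voronoi labeling sketch for a "loft" between two site layers.
import numpy as np

bottom = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])  # Delaunay vertices, z=0
top    = np.array([[0.3, 0.1], [0.7, 0.5], [0.4, 0.9]])  # Delaunay vertices, z=1

def sites_at(z):
    # Control curve = straight line between corresponding vertices (simplified).
    return (1.0 - z) * bottom + z * top

def label_slice(z, res=8):
    sites = sites_at(z)
    xs, ys = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    d = np.linalg.norm(pts[:, None, :] - sites[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(res, res)  # nearest control curve -> cell id

print(label_slice(0.5))  # which lofted cell owns each grid point at mid-height
```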
VFX fractal toolkit: integrating fractals into VFX pipeline
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338543
Juraj Tomori
Abstract: This paper proposes an innovative industry practice for generating and rendering fractal geometry. The VFX Fractal Toolkit (VFT) aims to provide powerful yet intuitive, artist-friendly workflows for exploring and generating vast numbers of fractals. VFT allows a node-based description of fractals, implemented in SideFX Houdini. It is built specifically for visual effects (VFX) pipelines and employs standard practices, aiming to give artists a toolset for exploring fractal forms of generative art directly in VFX applications.
Citations: 0
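VFT itself is node-based inside SideFX Houdini, so there is no single listing to reproduce here; as a generic stand-in for the kind of escape-time fractal evaluation such a toolkit wraps into nodes, here is a textbook Mandelbrot iteration (explicitly not VFT's implementation).

```python
# Classic escape-time iteration; the escape count typically drives shading.
import numpy as np

def escape_time(c, max_iter=64, bailout=2.0):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            return n               # escaped after n iterations
    return max_iter                # treated as inside the set

xs = np.linspace(-2.0, 1.0, 60)
ys = np.linspace(-1.2, 1.2, 24)
shades = " .:-=+*#%@"
for y in ys:
    print("".join(shades[min(escape_time(complex(x, y)) // 7, 9)] for x in xs))
```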
IR surface reflectance estimation and material type recognition using two-stream net and kinect camera
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338557
Seok-Kun Lee, Hwasup Lim, S. Ahn, Seungkyu Lee
Abstract: Recently, material type recognition using color or light field cameras has been studied. However, visual-pattern-based approaches that recognize material types without directly acquiring surface reflectance show limited performance. In this work, we estimate IR surface reflectance using an off-the-shelf ToF (Time-of-Flight) active sensor such as Kinect, and perform surface material type recognition based on both color and reflectance cues. We propose a two-stream deep neural network for material classification, consisting of a convolutional neural network encoding the visual cue and a recurrent neural network encoding the reflectance characteristic. Evaluations of the estimated IR surface reflectance and material type recognition on our Color-IR Material Dataset show promising performance compared to prior approaches.
Citations: 3
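A hedged sketch of a two-stream classifier in the spirit of the abstract above: a CNN stream for the color patch and a GRU stream for a per-pixel IR reflectance sequence, concatenated before a material-class head. All layer sizes, the sequence length, and the fusion-by-concatenation choice are illustrative guesses, not the paper's architecture.

```python
# Two-stream material classifier sketch (assumed sizes and fusion scheme).
import torch
import torch.nn as nn

class TwoStreamMaterialNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(            # visual-cue stream
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(1, 32, batch_first=True)  # reflectance stream
        self.head = nn.Linear(16 + 32, n_classes)

    def forward(self, rgb, refl):
        v = self.cnn(rgb)                    # rgb: (B, 3, H, W) -> (B, 16)
        _, h = self.rnn(refl.unsqueeze(-1))  # refl: (B, T) -> h: (1, B, 32)
        return self.head(torch.cat([v, h.squeeze(0)], dim=1))

net = TwoStreamMaterialNet()
logits = net(torch.rand(4, 3, 32, 32), torch.rand(4, 16))
print(logits.shape)                          # torch.Size([4, 10])
```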
Practical measurement and modeling of spectral skin reflectance
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338607
Y. Gitlina, D. S. Dhillon, G. Guarnera, A. Ghosh
Abstract: Accurate modeling and rendering of human skin appearance has been a long-standing goal in computer graphics. Of particular importance has been the realistic modeling and rendering of layered subsurface scattering in skin, for which various bio-physical models have been proposed based on the spectral distribution of chromophores in the epidermal and dermal layers of skin [Donner and Jensen 2006; Donner et al. 2008; Jimenez et al. 2010]. However, measuring the spectral absorption and scattering parameters for such bio-physical models has been a challenge in computer graphics. Previous works have either borrowed skin-type parameters from the tissue-optics literature [Donner and Jensen 2006], or employed extensive multispectral imaging to inverse-render detailed spatially varying parameters for a patch of skin [Donner et al. 2008]. Closest to our approach, Jimenez et al. [2010] employed observations under uniform broadband illumination to estimate two dominant parameters (melanin and hemoglobin concentrations) driving a qualitative appearance model for facial animation.
Citations: 0
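As a worked illustration of the two-parameter idea mentioned at the end of the abstract, the sketch below evaluates a single-layer Beer-Lambert transmittance in which melanin and hemoglobin concentrations scale per-wavelength absorption coefficients. The coefficient values, wavelength bins, and layer thickness are placeholder numbers, not measured data or the authors' model.

```python
# Two-chromophore Beer-Lambert sketch: T(lambda) = exp(-mu_a(lambda) * d),
# with mu_a a concentration-weighted sum of placeholder absorption spectra.
import numpy as np

wavelengths   = np.array([450.0, 550.0, 650.0])  # nm, illustrative bins
mu_melanin    = np.array([1.6, 0.9, 0.5])        # placeholder absorption, 1/mm
mu_hemoglobin = np.array([1.2, 1.4, 0.2])        # placeholder absorption, 1/mm

def transmittance(c_mel, c_hem, thickness_mm=0.5):
    """Transmittance through one homogeneous layer of the given thickness."""
    mu_a = c_mel * mu_melanin + c_hem * mu_hemoglobin
    return np.exp(-mu_a * thickness_mm)

print(transmittance(c_mel=0.8, c_hem=0.3))  # one value per wavelength bin
```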
Real-time structure aware color stippling
ACM SIGGRAPH 2019 Posters · Pub Date: 2019-07-28 · DOI: 10.1145/3306214.3338606
Lei Ma, Hong Deng, Beibei Wang, Yanyun Chen, T. Boubekeur
Abstract: In computer graphics, stippling is a widely used non-photorealistic rendering technique. In this art of representing images with dots, one of the key problems is dot placement: in general, dots should be distributed evenly, yet with some randomness. Blue-noise methods provide these characteristics and are used by state-of-the-art gray-scale algorithms to distribute dots. Color stippling, however, is more challenging, as each channel should be evenly distributed at the same time. Existing approaches cast color stippling as a multi-class blue-noise sampling problem and provide high-quality results at the cost of very long processing times. In this paper, we propose a real-time, structure-aware method for color stippling based on samples generated from an incremental Voronoi set. Our method can handle an arbitrary input color vector and produces significantly better results than previous methods at real-time frame rates. We evaluate the perceptual quality of our stippling with a user study, and its numerical performance by measuring the MSE between the image reconstructed from the stippling and the input image. The real-time performance of our method makes interactive stipple editing possible, giving artists an effective tool to quickly explore a wide space of color image stipplings.
Citations: 2
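The numerical evaluation mentioned in the abstract above is a plain MSE between the image reconstructed from the stipple pattern and the input. Below is a minimal grayscale version in which a Gaussian blur stands in for whatever reconstruction filter the authors used (our assumption), with a toy density-based stippling for input.

```python
# MSE between a blur-reconstructed stipple pattern and the target image.
import numpy as np
from scipy.ndimage import gaussian_filter

def stipple_mse(stipple_img, input_img, sigma=2.0):
    recon = gaussian_filter(stipple_img.astype(np.float64), sigma=sigma)
    return np.mean((recon - input_img.astype(np.float64)) ** 2)

rng = np.random.default_rng(0)
target = rng.random((64, 64))                         # toy grayscale target
dots = (rng.random((64, 64)) < target).astype(float)  # toy density stippling
print(stipple_mse(dots, target))
```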