Proceedings. Pacific Conference on Computer Graphics and Applications — Latest Articles

Adaptive Measurement of Anisotropic Material Appearance
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2017-01-01 DOI: 10.2312/PG.20171316
R. Vávra, J. Filip
Abstract: We present a practical adaptive method for acquisition of anisotropic BRDFs. It is based on sparse adaptive measurement of the complete four-dimensional BRDF space by means of one-dimensional slices, which form a sparse four-dimensional structure in the BRDF space and can be measured by continuous movements of a light source and a sensor. Such a sampling approach is especially advantageous for gonioreflectometer-based measurement devices, where the mechanical travel of the light source and sensor imposes a significant time constraint. To evaluate the method, we perform adaptive measurements of three materials and simulate adaptive measurements of ten others. We achieve a four-times lower reconstruction error than regular non-adaptive BRDF measurement given the same number of measured samples. Our method is almost twice as accurate as a previous adaptive method, and it requires two to five times fewer samples to achieve the same results as alternative approaches. (Pages 1-6)
Citations: 0
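The slice-based sampling idea lends itself to a simple one-dimensional sketch. The snippet below adaptively refines a single slice by inserting samples where linear interpolation deviates most from the measured signal; the function name `adaptive_slice_samples`, the midpoint-error criterion, and the initial grid size are illustrative assumptions, not the authors' actual refinement rule.

```python
import numpy as np

def adaptive_slice_samples(f, lo, hi, budget, init=5):
    """Adaptively sample a 1D reflectance slice f on [lo, hi]: start from a
    coarse uniform grid, then repeatedly insert a sample at the midpoint of
    the interval where linear interpolation deviates most from f.
    (In a real measurement, f(mid) would be measured once and cached.)"""
    xs = list(np.linspace(lo, hi, init))
    ys = [f(x) for x in xs]
    while len(xs) < budget:
        # linear-interpolation error at each interval midpoint
        errs = []
        for i in range(len(xs) - 1):
            mid = 0.5 * (xs[i] + xs[i + 1])
            lin = 0.5 * (ys[i] + ys[i + 1])
            errs.append((abs(f(mid) - lin), i, mid))
        err, i, mid = max(errs)
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return np.array(xs), np.array(ys)
```

Run on a sharply peaked lobe, the samples concentrate around the peak while flat regions stay coarse, which is the intended budget-saving behavior.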
Robust Gas Condensation Simulation with SPH based on Heat Transfer
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2017-01-01 DOI: 10.2312/PG.20171321
Tai-you Zhang, Jiajun Shi, Changbo Wang, Hong Qin, Chen Li
Abstract: Most simulations of natural phenomena in graphics are physically based, often involving heat transfer, phase transition, environmental constraints, or a combination of these. At the numerical level, particle-based schemes such as smoothed particle hydrodynamics (SPH) have proved to preserve subtle details while accommodating large quantities of particles and enabling complex interaction during heat transition. In this paper, we propose a novel hybrid complementary framework that faithfully models intricate details of vapor condensation while circumventing the disadvantages of existing methods. The phase transition is governed by robust heat transfer and the dynamic characteristics of condensation, so that condensed drops are precisely simulated with the SPH model. We introduce the dew point to ensure faithful visual simulation, as atmospheric pressure and relative humidity were previously isolated from condensation. Moreover, we design an equivalent substitution for ambient impacts to correct the heat transfer across the boundary layer and reduce the number of air particles required. To generate plausible high-resolution visual effects, we extend the standard height map with more physical control and construct arbitrary surface shapes via reproduction on the normal map. We demonstrate the advantages of our framework in several fluid scenes, including vapor condensation on a mirror, together with several comparisons. (Pages 27-32)
Citations: 2
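Since the abstract makes the dew point the trigger for condensation, here is the standard Magnus approximation for computing it from air temperature and relative humidity. The constants and the function name `dew_point_c` are a textbook choice, not necessarily the formulation used in the paper.

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Magnus-approximation dew point (deg C) from air temperature (deg C)
    and relative humidity in (0, 1]. Condensation on a surface occurs when
    the surface temperature drops below this value."""
    a, b = 17.62, 243.12  # Magnus constants for water over -45..60 deg C
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)
```

At 100% relative humidity the dew point equals the air temperature, and it drops as the air gets drier, which matches the physical intuition behind the condensation test.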
Interactive Multicut Video Segmentation
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2016-10-11 DOI: 10.2312/PG.20161332
Evgeny Levinkov, J. Tompkin, Nicolas Bonneel, Steffen Kirchhoff, Bjoern Andres, H. Pfister
Abstract: Video segmentation requires separating foreground from background, but the general problem extends to more complicated scene segmentations of different objects and their multiple parts. We develop a new approach to interactive multi-label video segmentation in which many objects are segmented simultaneously with consistent spatio-temporal boundaries, based on intuitive multi-colored brush scribbles. From these scribbles, we derive constraints that define a combinatorial problem known as the multicut, which is notoriously difficult and slow to solve. We describe a solution using efficient heuristics that makes multi-label video segmentation interactive. As our solution generalizes typical binary segmentation tasks while also improving efficiency in multi-label tasks, our work shows the promise of multicuts for interactive video segmentation. (Pages 33-38)
Citations: 7
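The multicut problem partitions a graph given signed edge weights (positive edges want their endpoints together, negative edges apart). A common family of fast heuristics is greedy additive edge contraction; the sketch below illustrates that style of heuristic on a tiny graph and is not the specific solver described in the paper.

```python
def greedy_multicut(n, weights):
    """Greedy additive edge contraction for the multicut objective:
    repeatedly merge the pair of clusters joined by the largest positive
    total weight, stopping when only repulsive (negative) connections
    remain between clusters. weights maps node pairs (u, v) to a weight."""
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    while True:
        # total weight between each pair of current clusters
        between = {}
        for (u, v), wt in weights.items():
            ru, rv = find(u), find(v)
            if ru != rv:
                key = (min(ru, rv), max(ru, rv))
                between[key] = between.get(key, 0.0) + wt
        if not between:
            break
        (u, v), best = max(between.items(), key=lambda kv: kv[1])
        if best <= 0:  # only attractive edges trigger a merge
            break
        parent[v] = u
    return [find(i) for i in range(n)]
```

On two attractive pairs joined by a repulsive edge, the heuristic merges each pair but keeps the pairs apart, yielding a multi-label partition without a fixed number of segments.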
Local Detail Enhancement for Volume Rendering under Global Illumination
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2016-10-11 DOI: 10.2312/PG.20161334
Jinta Zheng, Tianjin Zhang, J. Qin
Abstract: We present a novel method for perception-enhanced, realistic volume rendering. Compared with traditional lighting systems, which either tend to eliminate important local shapes and details in volume data or cannot offer interactive global illumination, our method can enhance the edges and curvatures within a volume under global illumination through a user-friendly interface. We first propose an interactive volumetric lighting model that both simulates scattering and enhances local detail; in this model, users only need to specify a key light source. Next, we propose a new cue that intensifies shape perception by enhancing local edges and details. The cue can be pre-computed, so the rendering process remains real-time. Experiments on a variety of volume data demonstrate that the proposed method generates more detail, and hence more realistic rendering results. (Pages 45-50)
Citations: 0
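One simple pre-computable cue of the kind the abstract describes is the gradient magnitude of the scalar volume, which peaks at edges and material boundaries and can be folded into shading as an enhancement weight. This is an illustrative choice; the paper defines its own cue.

```python
import numpy as np

def edge_cue(volume):
    """Pre-computable local-detail cue: gradient magnitude of a 3D scalar
    volume, normalized to [0, 1]. High values mark edges/boundaries that a
    renderer can boost without touching the global-illumination pass."""
    gx, gy, gz = np.gradient(volume.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    peak = mag.max()
    return mag / peak if peak > 0 else mag
```

Because the cue depends only on the data, it can be computed once per volume and sampled during interactive rendering, which is what keeps the enhancement real-time.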
Compressing Bidirectional Texture Functions via Tensor Train Decomposition
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2016-10-11 DOI: 10.2312/PG.20161329
R. Ballester-Ripoll, R. Pajarola
Abstract: Material reflectance properties play a central role in photorealistic rendering. Bidirectional texture functions (BTFs) can faithfully represent these complex properties, but their inherent high dimensionality (texture coordinates, color channels, view and illumination directions) requires many coefficients to encode. Numerous algorithms based on tensor decomposition have been proposed for efficient compression of multidimensional BTF arrays; however, these prior methods still grow exponentially in size with the number of dimensions. We tackle the BTF compression problem with a different model, the tensor train (TT) decomposition. The key difference is that TT compression scales linearly with the input dimensionality and is thus much better suited to high-dimensional data tensors. Furthermore, it allows faster random-access texel reconstruction than previous Tucker-based approaches. We demonstrate the performance benefits of the TT decomposition in terms of accuracy and visual appearance, compression rate, and reconstruction speed. (Pages 19-22)
Citations: 3
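The tensor-train format can be computed with a sequence of truncated SVDs (the standard TT-SVD algorithm). A minimal sketch, showing why storage is a chain of small 3-way cores rather than one exponentially large core:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Sequential truncated-SVD (TT-SVD) decomposition of a d-way array into
    tensor-train cores of shape (r_prev, n_k, r_k); for fixed ranks, storage
    grows linearly with the number of dimensions."""
    dims = tensor.shape
    cores, rank, mat = [], 1, tensor
    for n in dims[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, n, r))
        mat = s[:r, None] * vt[:r]  # carry the remainder to the next mode
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train back into a full tensor (for checking accuracy;
    random-access texel lookup would contract one slice per core instead)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

For a real BTF one would reshape the sampled array so that texture, color, view, and light axes are separate modes, then truncate ranks to trade accuracy against compression.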
Viewpoint Selection for Taking a Good Photograph of Architecture
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2016-10-11 DOI: 10.2312/PG.20161333
Jingwu He, Wen-Ji Zhou, Linbo Wang, Hongjie Zhang, Yanwen Guo
Abstract: This paper studies how to choose the viewpoint for taking good photographs of architecture. We achieve this by learning from professional photographs of world-famous landmarks available on the Internet. Unlike previous efforts in photo quality assessment, which rely mainly on visual features, we show that combining visual features with geometric features computed on 3D models yields a more reliable evaluation of viewpoint quality. Specifically, we collect a set of photographs for each of six world-famous buildings, as well as their 3D models, from the Internet. Viewpoints are recovered for the images through an image-model registration process, after which a newly proposed viewpoint clustering strategy is used to validate users' viewpoint preferences when photographing landmarks. Finally, we extract a number of 2D and 3D features for each image based on multiple visual and geometric cues, and perform viewpoint recommendation by learning from both 2D and 3D features, achieving superior performance over using either alone. We show the effectiveness of the proposed approach through extensive experiments. (Pages 39-44)
Citations: 2
Icon Set Selection via Human Computation
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2016-10-11 DOI: 10.2312/PG.20161326
L. Laursen, Yuki Koyama, Hsiang-Ting Chen, Elena Garces, D. Gutierrez, R. Harper, T. Igarashi
Abstract: Picking the best icons for a graphical user interface is difficult. We present a new method which, given several icon candidates for each piece of functionality, selects a complete icon set optimized for comprehensibility and identifiability. Both properties are measured using human computation. We apply our method to a domain with a less established iconography and produce several icon sets. To evaluate the method, we conduct a user study comparing these icon sets with a designer-picked set. Our estimated comprehensibility score correlates with the percentage of correctly understood icons, and our method produces an icon set with a higher comprehensibility score than the set picked by an involved icon designer. The estimated identifiability score and related tests did not yield significant findings. Our method is easy to integrate into a traditional icon design workflow and is intended for use by both icon designers and their clients. (Pages 1-6)
Citations: 11
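The selection step can be framed as a small combinatorial optimization: pick one icon per function so that summed per-icon comprehensibility plus pairwise distinctness is maximal. The sketch below brute-forces small candidate sets; the scoring dictionaries stand in for the paper's crowdsourced (human-computation) estimates, and `select_icon_set` is an assumed name.

```python
from itertools import product

def select_icon_set(candidates, comp, ident):
    """Pick one icon per function maximizing total comprehensibility (comp,
    per icon) plus a pairwise identifiability bonus (ident[a][b] for icon
    pairs with a < b), via exhaustive search over the candidate lists."""
    best_set, best_score = None, float('-inf')
    for combo in product(*candidates):
        score = sum(comp[i] for i in combo)
        score += sum(ident[a][b] for a in combo for b in combo if a < b)
        if score > best_score:
            best_set, best_score = list(combo), score
    return best_set
```

With a negative identifiability entry penalizing two confusable icons, the optimizer avoids pairing them even when each scores well individually; for larger candidate pools the exhaustive search would be replaced by a heuristic.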
Optimized Route for Crowd Evacuation
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2016-10-11 DOI: 10.2312/PG.20161327
Sai-Keung Wong, Yu-Shuen Wang, P. Tang, Tsung-Yu Tsai
Abstract: An evacuation plan helps people move away from an area or a building. To achieve fast evacuation, we present an algorithm that computes the optimal route for each local region. The idea is to reduce congestion and maximize the number of evacuees arriving at exits in every time span. Our system considers the crowd distribution, exit locations, and corridor widths when determining the optimal routes, and it simulates crowd movements during route optimization. We expect neighboring crowds taking different evacuation routes to arrive at their respective exits at nearly the same time; if this is not the case, our system updates the routes of the slower crowds. Because crowd simulation is non-linear, the optimal routes are computed iteratively, and the process repeats until an optimal state is achieved. Experimental results demonstrate the feasibility of our evacuation route optimization. (Pages 7-11)
Citations: 1
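The arrive-at-nearly-the-same-time principle can be sketched with a toy 1D model: an exit's finish time is walking time plus a queueing term (assigned people divided by exit throughput), and each round reroutes one group away from the currently slowest exit if that lowers the overall makespan. Everything here (the cost model, `assign_exits`, the 1D geometry) is an illustrative stand-in for the paper's simulation-in-the-loop optimization.

```python
def assign_exits(groups, exits, iters=50):
    """groups: list of (position, size); exits: list of (position, throughput).
    Start from nearest-exit assignment, then iteratively reroute one group
    from the slowest exit whenever doing so reduces the evacuation makespan."""
    def walk(p, q):
        return abs(p - q)

    assign = [min(range(len(exits)), key=lambda e: walk(g[0], exits[e][0]))
              for g in groups]

    def finish(e):  # finish time at exit e: slowest walker + queue drain
        members = [i for i, a in enumerate(assign) if a == e]
        if not members:
            return 0.0
        load = sum(groups[i][1] for i in members)
        tmax = max(walk(groups[i][0], exits[e][0]) for i in members)
        return tmax + load / exits[e][1]

    for _ in range(iters):
        times = [finish(e) for e in range(len(exits))]
        slow = max(range(len(exits)), key=lambda e: times[e])
        best = None
        for i, a in enumerate(assign):
            if a != slow:
                continue
            for e in range(len(exits)):
                if e == slow:
                    continue
                assign[i] = e  # tentatively reroute group i
                m = max(finish(x) for x in range(len(exits)))
                if m < times[slow] and (best is None or m < best[0]):
                    best = (m, i, e)
                assign[i] = slow  # undo tentative move
        if best is None:
            break  # no reroute improves the makespan
        assign[best[1]] = best[2]
    return assign
```

With most groups clustered near one exit, the optimizer sheds load to the farther exit until both finish at comparable times, mirroring the paper's congestion-reduction goal.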
Reflectance and Shape Estimation for Cartoon Shaded Objects
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2016-10-11 DOI: 10.2312/PG.20161331
Hideki Todo, Yasushi Yamaguchi
Abstract: Although many photorealistic relighting methods provide a way to change the illumination of objects in a digital photograph, it is currently difficult to relight a cartoon shading style in digital illustrations. The main difference between photorealistic and cartoon shading styles is that cartoon shading is characterized by soft color quantization and nonlinear color variations, which cause noticeable reconstruction errors under a physical reflectance assumption such as Lambertian. To handle this non-photorealistic shading property, we focus on the shading analysis of the most fundamental cartoon shading technique. Based on its color map shading representation, we propose a simple method to decompose the input shading into a smooth shape with a nonlinear reflectance property. We have conducted simple ground-truth evaluations comparing our results with those obtained by other approaches. (Pages 27-32)
Citations: 0
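Under a color-map shading representation, part of the decomposition amounts to inverting a 1D color map per pixel to recover a scalar shading parameter. A minimal nearest-neighbour sketch of that inversion (`invert_color_map` is an assumed name; the paper's method additionally recovers a smooth shape and a nonlinear reflectance):

```python
import numpy as np

def invert_color_map(image, cmap):
    """Recover a per-pixel shading parameter t in [0, 1] from a cartoon-shaded
    RGB image by nearest-neighbour lookup into a 1D color map (cmap is an
    array of shape (k, 3), ordered from dark to light)."""
    h, w, _ = image.shape
    flat = image.reshape(-1, 3).astype(float)
    # squared distance from every pixel to every color-map entry
    d = ((flat[:, None, :] - cmap[None, :, :].astype(float)) ** 2).sum(-1)
    t = d.argmin(1) / (len(cmap) - 1)
    return t.reshape(h, w)
```

On an image rendered directly from the color map, the lookup recovers the shading parameter exactly; real illustrations need the soft quantization handled on top of this.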
TRAINING PUBLIC SERVICE INTERPRETERS AND TRANSLATORS IN THE LEGAL CONTEXT
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2016-09-01 DOI: 10.18355/pg.2016.5.2.320-336
Carmen Valero-Garcés
Abstract: Formal training and practice in public service interpreting and translating (PSIT), also known as community interpreting, has been attracting growing interest in Translation Studies (TS). For this field to develop, training and practice need to find common ground with society's needs and actual professional practice. Based on a case study, the Master's in Intercultural Communication, Interpreting and Translating in Public Services (MICIT) at the University of Alcalá (UAH), Madrid, Spain, an attempt to integrate training, practice, and market needs is described and assessed. The MICIT covers translation and interpreting in different settings, although this paper focuses on the legal domain. Data are drawn from the last six academic years (2006-2012). The key elements of the existing debate between LIT and PSIT are first introduced, defining some concepts and discussing areas of study as well as the connections between PSIT and LIT; secondly, the postgraduate programme at UAH is described, focusing on LIT modules; thirdly, the integration of internships and research in the programme is further considered and assessed. Results of research within the legal context carried out by postgraduate students are then presented. While emphasizing the usefulness of making research in TS an integral part of the training of legal interpreters and translators, I mention some of the challenges that remain. (Pages 320-336)
Citations: 0