Proceedings. Pacific Conference on Computer Graphics and Applications: Latest Publications

Progressive 3D Scene Understanding with Stacked Neural Networks
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2018-01-01 DOI: 10.2312/pg.20181280
Youcheng Song, Zhengxing Sun
Abstract: 3D scene understanding is difficult due to the natural hierarchical structures and complicated contextual relationships in 3D scenes. In this paper, a progressive 3D scene understanding method is proposed. The scene understanding task is decomposed into several different but related tasks, and semantic objects are progressively separated from coarse to fine. This is achieved by stacking multiple segmentation networks: the former network segments the 3D scene at a coarser level and passes the result as context to the latter one for finer-grained segmentation. For network training, we build a connection graph (vertices indicating objects, edge weights indicating the contact area between objects) and calculate a maximum spanning tree to generate coarse-to-fine labels. We then train the stacked network with hierarchical supervision based on the generated coarse-to-fine labels. Finally, using the trained model, we not only obtain better segmentation accuracy at the finest level than directly using the segmentation network, but also obtain a hierarchical understanding of the 3D scene as a bonus.
CCS Concepts: Computing methodologies → Scene understanding; Neural networks; Shape representations
Pages: 57-60
Citations: 0
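The connection-graph step described in the abstract above lends itself to a short sketch. Below is a minimal, illustrative Python version, not the paper's code: objects are vertices, contact areas are edge weights, and Kruskal's algorithm over descending weights yields the maximum spanning tree used to derive coarse-to-fine labels. The toy scene and all names are invented.

```python
def maximum_spanning_tree(num_objects, edges):
    """Kruskal's algorithm on descending weights.

    edges: list of (contact_area, u, v) tuples.
    Returns the chosen tree edges, largest contact areas first.
    """
    parent = list(range(num_objects))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for area, u, v in sorted(edges, reverse=True):  # largest contact first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((area, u, v))
    return tree

# Toy scene: objects 0-2 touch each other strongly (one coarse group),
# object 3 only lightly touches the rest.
edges = [(5.0, 0, 1), (4.0, 1, 2), (0.5, 2, 3), (0.3, 0, 3)]
print(maximum_spanning_tree(4, edges))  # → [(5.0, 0, 1), (4.0, 1, 2), (0.5, 2, 3)]
```

Cutting the tree at its weakest edges then produces coarser groupings, which is one plausible way the coarse-to-fine label hierarchy could be read off.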
Spherical Blue Noise
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2018-01-01 DOI: 10.2312/pg.20181267
Kin-Ming Wong, T. Wong
Abstract: We present a physically based method that generates an unstructured uniform point set directly on the S2 sphere. Spherical uniform point sets are useful for illumination sampling in Quasi Monte Carlo (QMC) rendering, but it is challenging to generate high-quality uniform point sets directly. Most methods rely on mapping low-discrepancy unit-square point sets to the spherical domain; however, these transformed point sets often exhibit sub-optimal uniformity because the low-discrepancy properties are not preserved. Our method is designed specifically for direct generation of uniform point sets in the spherical domain. We name the generated result the Spherical Blue Noise point set because it shares similar point distribution characteristics with 2D blue noise. Our point sets possess high spatial uniformity without a global structure, and we show that they deliver competitive results for illumination integration in QMC rendering and for general numerical integration on the spherical domain.
CCS Concepts: Computing methodologies → Ray tracing
Pages: 5-8
Citations: 0
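The abstract does not spell out the paper's algorithm; as a rough illustration of the "physically based" idea, here is a naive particle-repulsion sketch (all parameters invented) that spreads points over the unit sphere by pushing them apart with inverse-square forces and reprojecting onto S2 after each step.

```python
import numpy as np

def spherical_repulsion(n, iters=200, step=0.01, rng=None):
    """Toy uniform point set on S2 via pairwise repulsion (not the paper's method)."""
    rng = np.random.default_rng(rng)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)     # random start on S2
    for _ in range(iters):
        diff = p[:, None, :] - p[None, :, :]          # pairwise offsets
        d2 = (diff ** 2).sum(-1) + np.eye(n)          # +I avoids self-division
        force = (diff / d2[..., None] ** 1.5).sum(1)  # inverse-square push
        p += step * np.clip(force, -10, 10)           # clamp keeps updates stable
        p /= np.linalg.norm(p, axis=1, keepdims=True)  # reproject onto the sphere
    return p

pts = spherical_repulsion(64, rng=0)
```

Such relaxation gives an unstructured, roughly even spacing, which is the qualitative property the paper's "blue noise" naming refers to.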
TAVE: Template-based Augmentation of Visual Effects to Human Actions in Videos
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2018-01-01 DOI: 10.14711/thesis-991012636368203412
Jingyuan Liu, Xuren Zhou, Hongbo Fu, Chiew-Lan Tai
Abstract: We present TAVE, a framework that allows novice users to add interesting visual effects by mimicking human actions in a given template video, in which pre-defined visual effects have already been associated with specific human actions. Our framework is mainly based on high-level human-pose features extracted from video frames and uses low-level image features as auxiliary information. We encode an action as a set of code sequences representing joint motion directions and use a finite state machine to recognize the action state of interest. The visual effects, possibly with occlusion masks, can be automatically transferred from the template video to a target video containing similar human actions.
Pages: 3-4
Citations: 0
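The direction-code-plus-FSM idea in the abstract above can be sketched in a few lines. The direction codes, reset policy, and the template below are invented for illustration, not taken from the paper.

```python
def quantize(dx, dy):
    """Map a 2D joint displacement to a coarse direction code."""
    if abs(dx) < 1e-3 and abs(dy) < 1e-3:
        return "STILL"
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "UP" if dy > 0 else "DOWN"

class ActionFSM:
    """Advance through a template's code sequence; completion fires the effect."""
    def __init__(self, template):
        self.template = template  # expected codes, e.g. ["UP", "UP", "RIGHT"]
        self.state = 0

    def step(self, code):
        if code == self.template[self.state]:
            self.state += 1
            if self.state == len(self.template):
                self.state = 0
                return True       # action recognized: trigger the visual effect
        elif code != "STILL":
            self.state = 0        # unexpected motion resets the machine
        return False

fsm = ActionFSM(["UP", "UP", "RIGHT"])
motions = [(0, 1), (0, 0), (0, 1), (1, 0.2)]   # per-frame joint displacements
fired = [fsm.step(quantize(dx, dy)) for dx, dy in motions]
print(fired)  # → [False, False, False, True]
```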
A Visual Analytics Approach for Traffic Flow Prediction Ensembles
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2018-01-01 DOI: 10.2312/pg.20181281
Kezhi Kong, Yuxin Ma, Chentao Ye, Junhua Lu, X. Chen, Wei Zhang, Wei Chen
Abstract: Traffic flow prediction plays a significant role in Intelligent Transportation Systems (ITS). Due to the variety of prediction models, the prediction results form an intricate structure of ensembles, which leaves the challenge of understanding and evaluating the ensembles from different perspectives. In this paper, we propose a novel visual analytics approach for analyzing the predicted ensembles. Our approach models the uncertainty of different traffic flow prediction results. The variations in space, time, and network structure of those results are presented with visualization designs, and the visual interface provides a suite of interactions to enhance exploration of the ensembles. With the system, analysts can discover intrinsic patterns in the ensemble. We use real-world urban traffic data to demonstrate the effectiveness of our system.
CCS Concepts: Human-centered computing → Visual analytics
Pages: 61-64
Citations: 2
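One simple way to model the ensemble uncertainty the abstract mentions is per-link, per-timestep spread across models; the shapes, numbers, and names below are invented stand-ins, not the paper's data or method.

```python
import numpy as np

# predictions[model, link, time]: hypothetical flows from 3 models,
# 2 road links, 4 time steps.
predictions = np.array([
    [[100, 120, 130, 110], [40, 45, 50, 42]],
    [[ 98, 125, 128, 115], [41, 44, 55, 40]],
    [[105, 118, 140, 108], [39, 47, 52, 43]],
], dtype=float)

mean = predictions.mean(axis=0)     # ensemble mean per link/time
spread = predictions.std(axis=0)    # disagreement between models
# Links and time steps with high spread are where an analyst would drill down.
worst_link, worst_t = np.unravel_index(spread.argmax(), spread.shape)
print(worst_link, worst_t)
```

A visual interface like the one described would then map `mean` and `spread` to position and, say, color or band width.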
Mesh Parameterization: a Viewpoint from Constant Mean Curvature Surfaces
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2018-01-01 DOI: 10.2312/PG.20181272
Hui Zhao, Kehua Su, Chenchen Li, Boyu Zhang, Shi Liu, Lei Yang, Na Lei, S. Gortler, X. Gu
Pages: 25-28
Citations: 1
3D VAE-Attention Network: A Parallel System for Single-view 3D Reconstruction
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2018-01-01 DOI: 10.2312/PG.20181279
Fei Hu, Xinyan Yang, Wei Zhong, Long Ye, Qin Zhang
Pages: 53-56
Citations: 2
Light-Field DVR on GPU for Streaming Time-Varying Data
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2018-01-01 DOI: 10.2312/PG.20181283
D. Ganter, Martin Alain, David J. Hardman, A. Smolic, M. Manzke
Abstract: Direct Volume Rendering (DVR) of volume data can be a memory-intensive task in terms of footprint and cache coherency. Ray-guided methods may not be the best option for interactively rendering to light fields due to feedback loops and sporadic sampling, and pre-computation can rule out time-varying data. We present a pipelined approach that schedules the rendering of sub-regions of streaming time-varying volume data while minimizing the intermediate sub-buffers needed, sharing the workload between CPU and GPU. We show there is a significant advantage to such an approach.
CCS Concepts: Computing methodologies → Rendering; Parallel algorithms; Graphics systems and interfaces
Pages: 69-72
Citations: 0
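The pipelining idea in the abstract, overlapping CPU-side staging with GPU-side rendering while keeping few intermediate buffers, can be mimicked with a bounded producer/consumer queue. This is only a shape-of-the-idea sketch; the strings stand in for real volume bricks and render passes.

```python
import queue
import threading

staged = queue.Queue(maxsize=1)   # depth-1 queue = minimal intermediate sub-buffers

def cpu_stage(num_steps):
    """Producer: decode and stage each time step of the streaming volume."""
    for t in range(num_steps):
        staged.put(f"volume step {t}")   # stand-in for decode/upload prep
    staged.put(None)                     # sentinel: stream finished

def gpu_render(results):
    """Consumer: render each staged sub-region as it becomes available."""
    while (item := staged.get()) is not None:
        results.append(f"rendered {item}")  # stand-in for the DVR pass

results = []
workers = [threading.Thread(target=cpu_stage, args=(4,)),
           threading.Thread(target=gpu_render, args=(results,))]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(results)
```

Because the queue holds at most one staged step, staging for step t+1 blocks until step t has been consumed, which is the buffer-minimizing behavior the abstract alludes to.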
Real-time Shadow Removal using a Volumetric Skeleton Model in a Front Projection System
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2017-10-17 DOI: 10.2312/pg.20171318
Jaedong Kim, Hyunggoog Seo, Seunghoon Cha, Jun-yong Noh
Abstract: When a person is located between a display and an operating projector, a shadow is cast on the display. The shadow may eliminate important visual information and therefore adversely affects the viewing experience. There have been various attempts to remove shadows cast on a projection display by using multiple projectors. We propose a novel real-time approach to removing the shadow cast by a person who dynamically interacts with the display, making limb motions, in a front projection system. The proposed method utilizes a human skeleton obtained from a depth camera to track the person's posture as it changes over time. A model consisting of spheres and conical frustums is constructed from the skeleton information to represent volumetric information about the tracked person. Our method precisely estimates the shadow region by projecting the volumetric model onto the display. In addition, intensity masks based on a distance field help suppress the shadow afterimage that appears when the person moves abruptly and smooth the brightness difference caused by different projectors at the boundary of the shadow region.
Pages: 13-16
Citations: 1
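The distance-field intensity mask described in the abstract can be sketched as follows: weight 1 inside the estimated shadow, falling off linearly with distance outside it, so the compensating projector blends smoothly with the occluded one. The falloff width and array sizes are invented for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def soft_shadow_mask(shadow, falloff=8.0):
    """shadow: boolean HxW array, True inside the estimated shadow region.
    Returns weights in [0, 1]: 1 inside the shadow, decaying with distance."""
    dist_outside = distance_transform_edt(~shadow)  # 0 inside the shadow
    return np.clip(1.0 - dist_outside / falloff, 0.0, 1.0)

shadow = np.zeros((32, 32), dtype=bool)
shadow[10:20, 10:20] = True                 # estimated shadow of the person
mask = soft_shadow_mask(shadow)
# The compensating projector's contribution is weighted by `mask`, the occluded
# projector's by (1 - mask), so brightness blends smoothly at the boundary.
```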
Computing Restricted Voronoi Diagram on Graphics Hardware
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2017-01-01 DOI: 10.2312/PG.20171320
Jiawei Han, Dong‐Ming Yan, Lili Wang, Qinping Zhao
Abstract: The 3D restricted Voronoi diagram (RVD), defined as the intersection of the 3D Voronoi diagram of a point set with a mesh surface, has many applications in geometry processing. Several CPU algorithms exist for computing RVDs; however, such algorithms still cannot compute RVDs in real time. In this short paper, we propose an efficient algorithm for computing RVDs on graphics hardware. We demonstrate the robustness and efficiency of the proposed GPU algorithm by applying it to surface remeshing based on centroidal Voronoi tessellation.
Pages: 23-26
Citations: 5
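The GPU algorithm itself is not in the abstract; as a reference point, here is a naive CPU sketch of what an RVD computes: each sample point of the mesh surface is assigned to its nearest Voronoi seed, partitioning the surface into restricted cells. A real implementation clips each triangle exactly against bisector planes rather than labeling samples; the random data below is a stand-in for a mesh.

```python
import numpy as np

def assign_surface_to_seeds(surface_pts, seeds):
    """Nearest-seed label for each surface sample point (brute force)."""
    d2 = ((surface_pts[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
seeds = rng.random((8, 3))                 # Voronoi sites
surface_pts = rng.random((1000, 3))        # stand-in for mesh surface samples
labels = assign_surface_to_seeds(surface_pts, seeds)
```

For centroidal Voronoi tessellation, one would then move each seed to the centroid of its restricted cell and iterate.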
Transferring Pose and Augmenting Background Variation for Deep Human Image Parsing
Proceedings. Pacific Conference on Computer Graphics and Applications Pub Date: 2017-01-01 DOI: 10.2312/PG.20171317
Takazumi Kikuchi, Yuki Endo, Yoshihiro Kanamori, Taisuke Hashimoto, J. Mitani
Abstract: Human parsing is the fundamental task of estimating semantic parts in a human image, such as face, arm, leg, hat, and dress. Recent deep-learning-based methods have achieved significant improvements, but collecting training datasets with pixel-wise annotations is labor-intensive. In this paper, we propose two solutions to cope with limited datasets. First, to handle various poses, we incorporate a pose estimation network into an end-to-end human parsing network in order to transfer common features across the domains; the pose estimation network can be trained on rich datasets and feeds valuable features to the human parsing network. Second, to handle complicated backgrounds, we automatically increase the variation of background images by replacing the original backgrounds of human images with images obtained from large-scale scenery datasets. While each of the two solutions is versatile and beneficial to human parsing on its own, their combination yields further improvement.
CCS Concepts: Computing methodologies → Image segmentation; Image processing
Pages: 7-12
Citations: 0
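The background-augmentation step described above amounts to mask-based compositing: paste the segmented human foreground onto backgrounds drawn from a scenery dataset. A minimal sketch with synthetic arrays (sizes and values invented):

```python
import numpy as np

def replace_background(image, mask, background):
    """image, background: HxWx3 float arrays; mask: HxW, 1 = human pixels."""
    m = mask[..., None].astype(float)
    return image * m + background * (1.0 - m)

h, w = 4, 4
image = np.full((h, w, 3), 0.8)              # stand-in human photo
mask = np.zeros((h, w))
mask[1:3, 1:3] = 1                           # stand-in parsing mask
background = np.zeros((h, w, 3))             # stand-in scenery image
augmented = replace_background(image, mask, background)
```

Repeating this with many scenery images multiplies the effective training set without new annotation work.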