Chengzhi Tao, Jie Guo, Chen Gong, Beibei Wang, Yanwen Guo. "Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2021, pp. 7-12. doi:10.2312/PG.20211380

Abstract: We present an anti-aliased real-time rendering method for local area lights based on Linearly Transformed Cosines (LTCs). It significantly reduces the aliasing artifacts in highlights reflected from area lights that arise when meso-scale roughness (induced by normal maps) is ignored. The proposed method separates surface roughness into different scales and represents each scale with LTCs. A spherical convolution between them then yields the overall normal distribution and the final Bidirectional Reflectance Distribution Function (BRDF). The overall surface roughness is further approximated by a polynomial function to guarantee high efficiency and avoid additional storage. Experimental results show that our approach produces convincing multi-scale roughness across a range of viewing distances under local area lighting.

CCS Concepts: • Computing methodologies → Reflectance modeling
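The core trick the abstract describes, folding meso-scale (normal-map) roughness into the base BRDF roughness before shading, can be illustrated with the variance-addition approximation commonly used for specular antialiasing. This is a generic Toksvig/LEAN-style sketch under that assumption, not the paper's polynomial LTC fit; the function name and the cap at 1.0 are illustrative:

```python
import math

def combined_roughness(base_alpha, normal_map_variance):
    """Fold meso-scale roughness (variance of the filtered normals in a
    pixel footprint) into the base GGX roughness. For GGX, alpha^2 acts
    roughly like a lobe variance, so a common specular-antialiasing
    approximation simply adds the variances and clamps the result.
    Illustrative sketch only -- not the paper's LTC-based method."""
    return math.sqrt(min(base_alpha ** 2 + 2.0 * normal_map_variance, 1.0))
```

As the viewing distance grows and more normal-map detail falls into one pixel, `normal_map_variance` rises and the effective roughness widens the highlight instead of aliasing.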
Lohit Petikam, Andrew Chalmers, K. Anjyo, Taehyun Rhee. "Art-directing Appearance using an Environment Map Latent Space." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2021, pp. 43-48. doi:10.2312/PG.20211386

Abstract: In look development, environment maps (EMs) are used to verify 3D appearance under varied lighting (e.g., overcast, sunny, and indoor). Artists can only assign one fixed material, making it laborious to edit appearance individually for every EM. Artists can art-direct material and lighting in film post-production, but this is impossible in dynamic real-time games and live augmented reality (AR), where environment lighting is unpredictable. We present a new workflow for customizing appearance variation across a wide range of EM lighting in live applications. Appearance edits can be predefined and then automatically adapted to environment lighting changes. We achieve this by learning a novel 2D latent space of varied EM lighting. The latent space lets artists browse EMs in a semantically meaningful 2D view. For different EMs, artists can paint different material and lighting parameter values directly on the latent space. We robustly encode new EMs into the same space for automatic look-up of the desired appearance. This solves the new problem of preserving art direction in live applications without any artist intervention.

CCS Concepts: • Computing methodologies → Dimensionality reduction and manifold learning; Rendering
Rui Zeng, Ju Dai, Junxuan Bai, Junjun Pan, Hong Qin. "Human Motion Synthesis and Control via Contextual Manifold Embedding." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2021, pp. 25-30. doi:10.2312/PG.20211383

Abstract: Modeling motion dynamics for precise and rapid control with deterministic data-driven models is challenging due to the natural randomness of human motion. To address this, we propose a novel framework for continuous motion control based on probabilistic latent variable models. Control is implemented by recurrently querying between historical and target motion states rather than exact motion data. Our model takes a conditional encoder-decoder form with two stages. First, we use a Gaussian Process Latent Variable Model (GPLVM) to project motion poses onto a compact latent manifold. Motion states, such as walking phase and forward velocity, can be clearly recognized by analysis on the manifold. Second, taking the manifold as a prior, a Recurrent Neural Network (RNN) encoder makes temporal latent predictions from the previous and control states. An attention module then morphs the prediction by measuring latent similarities to the control and predicted states, dynamically preserving contextual consistency. Finally, the GP decoder reconstructs motion states back into motion frames. Experiments on walking datasets show that our model maintains motion states autoregressively while performing rapid and smooth transitions under control.

CCS Concepts: • Computing methodologies → Motion processing; Motion capture; Motion path planning; Learning latent representations
Yue Yu, Ying Li, Jingyi Zhang, Yue Yang. "SM-NET: Reconstructing 3D Structured Mesh Models from Single Real-World Image." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2021, pp. 55-60. doi:10.2312/PG.20211388

Abstract: Image-based 3D structured model reconstruction requires a network to learn the information missing between dimensions and to understand the structure of the 3D model. In this paper, SM-NET is proposed to reconstruct a 3D structured mesh model from a single real-world image. First, it treats the model as a sequence of parts and designs a shape autoencoder for 3D models. Second, the network extracts 2.5D information from the real-world image and maps it to the latent space of the shape autoencoder. Finally, the two are connected to complete the reconstruction task. In addition, a more suitable 3D structured model dataset is built to improve reconstruction quality. Experimental results show that we achieve reconstruction of 3D structured mesh models from a single real-world image, outperforming other approaches.
Yao Li, Xianggang Yu, Xiaoguang Han, Nianjuan Jiang, K. Jia, Jiangbo Lu. "A deep learning based interactive sketching system for fashion images design." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2020, pp. 13-18. doi:10.2312/PG.20201224

Abstract: In this work, we propose an interactive system to design diverse high-quality garment images from fashion sketches and texture information. The major challenge behind this system is generating high-quality, detailed texture that follows the user-provided texture information. Prior works mainly use a texture-patch representation and try to map a small texture patch to a whole garment image, and are hence unable to generate high-quality details. In contrast, inspired by intrinsic image decomposition, we decompose the task into texture synthesis and shading enhancement. In particular, we propose a novel bi-colored edge texture representation to synthesize textured garment images and a shading enhancer to render shading from the grayscale edges. The bi-colored edge representation provides simple but effective texture cues and color constraints, so that details can be better reconstructed. Moreover, with the rendered shading, the synthesized garment image becomes more vivid.
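The intrinsic-decomposition split described in the abstract ends with recombining the two stages' outputs. A minimal NumPy sketch of that final step, assuming a synthesized texture (albedo) image and a grayscale shading layer in [0, 1]; the function name and value ranges are illustrative, not from the paper:

```python
import numpy as np

def recombine(texture_rgb, shading_gray):
    """Intrinsic-style recombination: final image = texture (albedo)
    modulated per-pixel by a grayscale shading layer. Illustrative of
    the texture-synthesis / shading-enhancement split, not the paper's
    actual network output stage."""
    shaded = texture_rgb.astype(np.float32) * \
             shading_gray[..., None].astype(np.float32)
    return np.clip(shaded, 0, 255).astype(np.uint8)
```

Multiplying rather than adding the shading keeps dark folds dark regardless of the underlying texture color, which is what makes the result look "more vivid" than flat texture alone.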
M. Reischl, Christian Knauer, M. Guthe. "Using Landmarks for Near-Optimal Pathfinding on the CPU and GPU." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2020, pp. 37-42. doi:10.2312/pg.20201228
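The title points at the classic landmark ("ALT") family of heuristics: precompute exact shortest-path distances from a few landmark nodes, then use the triangle inequality to obtain an admissible A* lower bound. A minimal CPU sketch under that assumption; the graph, landmark choice, and function names are illustrative, and the paper's GPU variant is not shown:

```python
import heapq

def dijkstra(graph, src):
    """Exact shortest-path distances from src (graph: node -> [(nbr, w)])."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def alt_heuristic(landmark_dists, v, t):
    """Admissible lower bound on d(v, t): by the triangle inequality,
    |d(L, t) - d(L, v)| <= d(v, t) for every landmark L."""
    return max(abs(d.get(t, 0.0) - d.get(v, 0.0)) for d in landmark_dists)

def astar_alt(graph, src, dst, landmark_dists):
    """A* guided by the landmark lower bound; returns d(src, dst)."""
    g = {src: 0.0}
    pq = [(alt_heuristic(landmark_dists, src, dst), src)]
    while pq:
        _, u = heapq.heappop(pq)
        if u == dst:
            return g[u]
        for v, w in graph[u]:
            ng = g[u] + w
            if ng < g.get(v, float("inf")):
                g[v] = ng
                f = ng + alt_heuristic(landmark_dists, v, dst)
                heapq.heappush(pq, (f, v))
    return float("inf")
```

Usage: with `graph = {"a": [("b", 1), ("c", 4)], "b": [("a", 1), ("c", 2)], "c": [("a", 4), ("b", 2)]}` and `landmarks = [dijkstra(graph, "a")]`, `astar_alt(graph, "a", "c", landmarks)` finds the length-3 detour through "b". The landmark distances are computed once offline; at query time the heuristic is a handful of table lookups, which is what makes the scheme attractive for parallel hardware.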
Jiazhou Chen, Xinding Zhu, P. Bénard, Pascal Barla. "Stroke Synthesis for Inbetweening of Rough Line Animations." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2020, pp. 51-52. doi:10.2312/pg.20201233
Kosuke Sasaki, J. Mitani. "Simple Simulation of Curved Folds Based on Ruling-aware Triangulation." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2020, pp. 31-36. doi:10.2312/pg.20201227

Abstract: Folding a thin sheet material such as paper along curves creates a developable surface composed of ruled surface patches. When using such surfaces in design, designers often repeat a cycle of folding along curves drawn on a sheet and checking the folded shape. Although several methods for constructing such shapes on a computer have been proposed, it is still difficult to preview the folded shape instantly from a crease pattern. In this paper, we propose a simple method that approximately simulates curved folds with a triangular mesh generated from the crease pattern. The proposed method first approximates the curves in the crease pattern with polylines and then generates a triangular mesh. To construct the discretized developable surface, the mesh edges are rearranged so that they align with the estimated rulings. The method is characterized by its simplicity and is implemented on an existing origami simulator that runs in a web browser.

CCS Concepts: • Computing methodologies → Mesh models; Mesh geometry models
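The first step the abstract names, approximating crease-pattern curves with polylines, can be sketched by uniformly sampling a parametric curve. Here a quadratic Bézier stands in for a crease curve; the curve type and sample count are assumptions, not taken from the paper:

```python
def quadratic_bezier_polyline(p0, p1, p2, n=16):
    """Sample a quadratic Bezier crease curve into an n-segment polyline,
    a stand-in for the curve-discretization step before triangulation."""
    pts = []
    for i in range(n + 1):
        t = i / n
        # Bernstein basis weights for a quadratic Bezier curve.
        b = ((1 - t) ** 2, 2 * (1 - t) * t, t ** 2)
        pts.append(tuple(b[0] * a + b[1] * c + b[2] * e
                         for a, c, e in zip(p0, p1, p2)))
    return pts
```

The polyline's vertices then become mesh vertices along the fold, and the remaining work (the paper's contribution) is rearranging the triangulation's edges to follow the estimated rulings.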
Zhiyuan Su, Xiaoying Nie, Xukun Shen, Yong Hu. "Monocular 3D Fluid Volume Reconstruction Based on a Multilayer External Force Guiding Model." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2020, pp. 19-24. doi:10.2312/pg.20201225
S. Baek, Sungkil Lee. "Day-to-Night Road Scene Image Translation Using Semantic Segmentation." Proceedings of the Pacific Conference on Computer Graphics and Applications, 2020, pp. 47-48. doi:10.2312/pg.20201231

Abstract: We present a semi-automated framework that translates day-time road scene images into the night-time domain. Unlike recent studies based on Generative Adversarial Networks (GANs), we avoid learning the translation, and with it the risk of random failures. Our framework uses semantic annotation to extract scene elements, perceives the scene structure and depth, and applies per-element translation. Experimental results demonstrate that our framework can synthesize higher-resolution results without translation artifacts.
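The per-element translation the abstract describes can be caricatured with a semantic-mask lookup: each class gets its own day-to-night adjustment. A minimal NumPy sketch; the class names and dimming gains are purely illustrative and much simpler than the framework's actual per-element operations:

```python
import numpy as np

def day_to_night_stub(image, labels, gains=None):
    """Per-element day-to-night dimming: pixels of each semantic class are
    scaled by a class-specific brightness gain. The classes and gain
    values below are illustrative placeholders, not the paper's."""
    if gains is None:
        gains = {"sky": 0.05, "road": 0.3, "other": 0.25}
    out = image.astype(np.float32)
    for cls, gain in gains.items():
        out[labels == cls] *= gain  # boolean mask selects that class's pixels
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because every class is transformed independently and deterministically, the result contains none of the random failure modes a GAN can introduce, at the cost of needing a semantic segmentation as input.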