{"title":"On-site example-based material appearance digitization","authors":"Yiming Lin, P. Peers, A. Ghosh","doi":"10.1145/3230744.3230805","DOIUrl":"https://doi.org/10.1145/3230744.3230805","url":null,"abstract":"We present a novel example-based material appearance modeling method for digital content creation. The proposed method requires a single HDR photograph of an exemplar object made of a desired material under known environmental illumination. While conventional methods for appearance modeling require the object shape to be known, our method does not require prior knowledge of the shape of the exemplar, nor does it require recovering the shape, which improves robustness as well as simplify on-site appearance acquisition by non-expert users.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127520790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to move in crowd","authors":"Jaedong Lee, Jehee Lee","doi":"10.1145/3230744.3230782","DOIUrl":"https://doi.org/10.1145/3230744.3230782","url":null,"abstract":"The main goal of the crowd simulation is to generate realistic movements of agents. Reproducing the mechanism that seeing the environments, understanding current situation, and deciding where to step is crucial point to simulating crowd movements. We formulate the process of walking mechanism using deep reinforcement learning. And we experiment some typical scenarios.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133407809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lighting condition adaptive tone mapping method","authors":"Jiangyan Han, I. R. Khan, S. Rahardja","doi":"10.1145/3230744.3230773","DOIUrl":"https://doi.org/10.1145/3230744.3230773","url":null,"abstract":"We propose an adaptive tone mapping method for displaying HDR images according to ambient light conditions. To compensate the loss of perceived luminance in brighter viewing conditions, we enhance the HDR image by an algorithm based on the Naka-Rushton model. Changes of the HVS response under different adaptation levels are considered and we match the response under the ambient conditions with the plateau response to the original HDR scene. The enhanced HDR image is tone mapped through a tone mapping curve constructed by the original image luminance histogram to produce visually pleasing images under given viewing conditions.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133886206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design method of digitally fabricated spring glass pen","authors":"Kohei Ogawa, Kengo Tanaka, Tatsuya Minagawa, Yoichi Ochiai","doi":"10.1145/3230744.3230809","DOIUrl":"https://doi.org/10.1145/3230744.3230809","url":null,"abstract":"In this study, We propose a method to develop a spring glass dip pen by using a 3D printer and reproduce different types of writing feeling. There have been several studies on different types of pens to change the feel of writing. For example, EV-Pen [Wang et al. 2016] and haptics pens [Lee et al. 2004] changes the feel of pen writing with using vibration. However, our proposed method does not reproduce tactile sensation of softness by using vibrations.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"331 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134330690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Practical acquisition and rendering of common spatially varying holographic surfaces","authors":"Antoine Toisoul, D. S. Dhillon, A. Ghosh","doi":"10.1145/3230744.3230815","DOIUrl":"https://doi.org/10.1145/3230744.3230815","url":null,"abstract":"We present a novel approach to measure the appearance of commonly found spatially varying holographic surfaces. Such surfaces are made of one dimensional diffraction gratings that vary in orientations and periodicities over a sample to create impressive visual effects. Our method is able to recover the orientation and periodicity maps simply using a flash illumination and a DSLR camera. We present real-time renderings under environmental illumination using the measured maps that match the observed appearance.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129971017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic display zoom for people suffering from Presbyopia","authors":"Huiyi Fang, Kenji Funahashi","doi":"10.1145/3230744.3230754","DOIUrl":"https://doi.org/10.1145/3230744.3230754","url":null,"abstract":"Human eyes have an adjustment function to adjust for different distances of seeing. However, it becomes weaker as you get older. When you move paper closer to read small letters, it is not in focus. When you move it away to bring it into focus, it is too small to read. This condition is called Presbyopia. People suffering from presbyopia also suffer from this condition when they use a smartphone or tablet. Although they can magnify the display using the pinch operation, it is a bother. A method for automatic display zoom, to see detail and an overview, was proposed in [Satake et al. 2016]. This method measures the distance between a face and a screen to judge whether you want to see detail or an overview. When you move it close to your face, it judges you want to see detail and zooms in. When you move it away from your face, it judges that you want to see overview and zooms out. In this paper, we improve and apply this method for presbyopia. First we observe and analyze the behavior of presbyopic people when trying to read small letters. Then we propose a suitable zooming function, for example, a screen is zoomed in also when it is moved away if the person suffers from presbyopia.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130169688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the realism of mixed reality through physical simulation","authors":"A. Badias, I. Alfaro, D. González, F. Chinesta, E. Cueto","doi":"10.1145/3230744.3230775","DOIUrl":"https://doi.org/10.1145/3230744.3230775","url":null,"abstract":"We present a new way of adding augmented information based on the computation of the physical equations that truly govern the behavior of objects. In computer graphics, it is common to use big simplifications to be able to solve this type of equations in real time, obtaining in many occasions behaviors that differ remarkably from reality. However, using model order reduction (MOR) techniques we are able to pre-compute a parametric solution that is only evaluated in the visualization stage, greatly reducing the computation time in this on-line phase. We present also several examples that support our method, showing computational fluid dynamics (CFD) examples and deformable solids with nonlinear material behaviors. Since it is a mixed-reality implementation, we decided to create an interactive poster that allows the visualization of augmented reality videos using augmented reality techniques, what we call (AR)2.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128619027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning optimal lighting patterns for efficient SVBRDF acquisition","authors":"Kaizhang Kang, Zimin Chen, Jiaping Wang, Kun Zhou, Hongzhi Wu","doi":"10.1145/3230744.3230779","DOIUrl":"https://doi.org/10.1145/3230744.3230779","url":null,"abstract":"Digitally acquiring high-quality material appearance from the real-world is challenging, with applications in visual effects, e-commerce and entertainment. One popular class of existing work is based on hand-derived illumination multiplexing [Ghosh et al. 2009], using hundreds of patterns in the most general case [Chen et al. 2014].","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130623991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Progressive real-time rendering of unprocessed point clouds","authors":"Markus Schütz, M. Wimmer","doi":"10.1145/3230744.3230816","DOIUrl":"https://doi.org/10.1145/3230744.3230816","url":null,"abstract":"Rendering tens of millions of points in real time usually requires either high-end graphics cards, or the use of spatial acceleration structures. We introduce a method to progressively display as many points as the GPU memory can hold in real time by reprojecting what was visible and randomly adding additional points to uniformly converge towards the full result within a few frames. Our method heavily limits the number of points that have to be rendered each frame and it converges quickly and in a visually pleasing way, which makes it suitable even for notebooks with low-end GPUs. The data structure consists of a randomly shuffled array of points that is incrementally generated on-the-fly while points are being loaded. Due to this, it can be used to directly view point clouds in common sequential formats such as LAS or LAZ while they are being loaded and without the need to generate spatial acceleration structures in advance, as long as the data fits into GPU memory.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130573349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}