DiCE: dichoptic contrast enhancement for binocular displays
Fangcheng Zhong, G. Koulieris, G. Drettakis, M. Banks, Mathieu Chambe, F. Durand, Rafał K. Mantiuk
ACM SIGGRAPH 2019 Posters, 2019. DOI: https://doi.org/10.1145/3306214.3338578

Abstract: In stereoscopic displays, such as those used in VR/AR headsets, our two eyes are presented with different views. The disparity between the views is typically used to convey depth cues, but it could be used for other purposes. We devise a novel technique that takes advantage of binocular fusion to boost the perceived local contrast and visual quality of images. Since the technique is based on fixed tone curves, it has negligible computational cost and is well suited for real-time applications such as VR rendering. To control the trade-off between the level of enhancement and binocular rivalry, we conduct a series of experiments that lead to a new finding explaining the factors that dominate rivalry perception in a dichoptic presentation where two images of different contrasts are displayed. With this finding, we demonstrate that the enhancement can be quantitatively measured and binocular rivalry well controlled.
Learning from human-robot interactions in modeled scenes
Mark Murnane, Max Breitmeyer, Francis Ferraro, Cynthia Matuszek, Don Engel
ACM SIGGRAPH 2019 Posters, 2019. DOI: https://doi.org/10.1145/3306214.3338546

Abstract: There is increasing interest in using robots in simulation to understand and improve human-robot interaction (HRI). At the same time, the use of simulated settings to gather training data promises to help address a major data bottleneck in allowing robots to take advantage of powerful machine learning approaches. In this paper, we describe a prototype system that combines the robot operating system (ROS), the simulator Gazebo, and the Unity game engine to create human-robot interaction scenarios. A person can engage with the scenario using a monitor wall, allowing simultaneous collection of realistic sensor data and traces of human actions.
{"title":"A formal process to design visual archetypes based on character taxonomies","authors":"Angela Wang, Anthony Dalton Eason, E. Akleman","doi":"10.1145/3306214.3338579","DOIUrl":"https://doi.org/10.1145/3306214.3338579","url":null,"abstract":"While there are many professional examples of successful character designs, there seems to be little academic formalization in standardizing a process to achieve consistent visual results. In this work, we present such a formal process to construct visual designs for character archetypes that are given by \"verbal descriptions\". This process is based on visual semiotics that are used for creating clear meaning behind design choices while still retaining a sense of aesthetic through principles of artistic design. Using this process, we have developed a set of encyclopedic references for a wide variety of psychology and literary archetypes to demonstrate the power of this approach. We also used this process successfully in a visual storytelling class.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129597770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Meet in rain: a serious game to help the better appreciation of Chinese poems","authors":"Ye-Ning Jiang, H. Nishino","doi":"10.1145/3306214.3338587","DOIUrl":"https://doi.org/10.1145/3306214.3338587","url":null,"abstract":"Meet in Rain is a serious game on Chinese poetry. While invoking various events, players must complete given tasks, which help them better appreciate the poems, by exploring imaginary sceneries that depict Chinese poems. Its visual design also mimics Chinese paintings in the era when the poems were created. As only a few serious games exist for Chinese poetry and they mostly focus on knowledge acquisition, our work provides a rare design exemplar of a serious game that is designed with the intention to foster aesthetic appreciation for a cultural subject.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129628410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vector based glyph style transfer","authors":"P. Dhanuka, N. Kumawat, Nipun Jindal","doi":"10.1145/3306214.3338600","DOIUrl":"https://doi.org/10.1145/3306214.3338600","url":null,"abstract":"In this work, we solve the problem of real-time transfer of geometric style from a single glyph to the entire glyph set of a vector font. In our solution, a single glyph is defined as one or more closed Bézier paths which is further broken down in primitives to define a set of segments. The modification to these segments is percolated to the entire glyph set by comparing the set of segments across glyphs using techniques like the order and direction of segments and the spatial placement of segments. Once the target segments in other glyphs is identified the transformation from style glyph is applied to the target glyph. Furthermore, we establish user-controlled policies for percolation of style like mapping line segment modification to curve segments. This extension to the algorithm enables the user to create multiple variations of a glyph.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131554821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ISSv2 and OpenISS distributed system for real-time interaction for performing arts
Serguei A. Mokhov, Deschanel Li, Haotao Lai, Jashanjot Singh, Yiran Shen, Jonathan Llewellyn, Miao Song, S. Mudur
ACM SIGGRAPH 2019 Posters, 2019. DOI: https://doi.org/10.1145/3306214.3338539

Abstract: Illimitable Space System v2 (ISSv2) is a configurable toolbox that provides multimodal interaction and serves as a platform for artists to enhance their performances using depth and colour data from a 3D capture device. Its newest iteration, presented at ChineseCHI in 2018, is powered by an open-source core named OpenISS, which allows the ISSv2 platform to run as a distributed system: video and depth capture are performed on a computer acting as a server, while a client component displays the applied effects and video in a web browser. This has the added benefit of allowing artists to broadcast their performances live and opens the way for audience interaction. There are two primary motivations behind creating an open-source core for the ISS. First, open-source technology allows more people to participate in the development process and to understand how the technology works, while spreading maintenance responsibilities across the parties involved. Second, the core allows parts of the system to be swapped out without modifying the whole, which is particularly relevant for capture devices.
Interactive spatial augmented reality system for Chinese opera
Yanxiang Zhang, Yirun Shen, Weiwei Zhang, Z. Zhu, Pengfei Ma
ACM SIGGRAPH 2019 Posters, 2019. DOI: https://doi.org/10.1145/3306214.3338566

Abstract: In this research, the authors designed an interactive spatial augmented reality system for stage performance based on UWB positioning and Bluetooth triggering. The actor's position is obtained through an antenna tag carried by the actor and signal base stations placed on the stage, and special effects can be triggered through a Bluetooth module according to the actor's position. The system offers a high degree of freedom in practical applications and can present interactive spatial augmented reality effects, providing new possibilities for the application of spatial augmented reality in stage performance. It can bring a more immersive experience to audiences and opens new possibilities for the aesthetic creation of opera.
{"title":"GPGPU acceleration of environmental and movement datasets","authors":"Daniel Bird, S. Laycock","doi":"10.1145/3306214.3338584","DOIUrl":"https://doi.org/10.1145/3306214.3338584","url":null,"abstract":"Due to the increased availability and accuracy of GPS sensors, the field of movement ecology has been able to benefit from larger datasets of movement data. As miniaturisation and the efficiency of electronic components have improved, additional sensors have been coupled with GPS tracking to enable features related to the animal's state at a given position to be recorded. This capability is especially relevant to understand how environmental conditions may affect movement.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127939345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CubeHarmonic: a new musical instrument based on Rubik's cube with embedded motion sensor
Maria Mannone, Eri Kitamura, Jiawei Huang, Ryo Sugawara, Pascal Chiu, Y. Kitamura
ACM SIGGRAPH 2019 Posters, 2019. DOI: https://doi.org/10.1145/3306214.3338572

Abstract: A contemporary challenge involves scientific education and the connection between new technologies and the heritage of the past. CubeHarmonic (CH) joins novelty and tradition, creativity and education, science and art. It takes the shape of a novel musical instrument in which magnetic 3D motion-tracking technology meets musical performance and composition. CH is a Rubik's cube with a note on each facet and a chord or chord sequence on each face; the position of each facet is detected through magnetic 3D motion tracking. While scrambling the cube, the performer obtains new chords and new chord sequences. CH can be used to compose, improvise, and teach music and mathematics (group theory, permutations), with colors and physical manipulation supporting abstract thinking. Furthermore, CH allows visually impaired people to enjoy Rubik's cube manipulation by using sounds instead of colors.
OceanGAN
Christopher Ratto, M. Szeto, D. Slocum, Kevin Del Bene
ACM SIGGRAPH 2019 Posters, 2019. DOI: https://doi.org/10.1145/3306214.3338559

Abstract: Physics-based models for ocean dynamics and optical raytracing are used extensively for rendering maritime scenes in computer graphics [Darles et al. 2011]. Raytracing models can provide high-fidelity representations of an ocean image with full control of the underlying environmental conditions, sensor specifications, and viewing geometry. However, the computational expense of rendering ocean scenes can be high. This work demonstrates an alternative approach to ocean raytracing via machine learning, specifically Generative Adversarial Networks (GANs) [Goodfellow et al. 2014]. In this paper, we demonstrate that a GAN trained on several thousand small scenes produced by a raytracing model can be used to generate megapixel scenes roughly an order of magnitude faster, with a consistent wave spectrum and minimal processing artifacts.