Title: Food texture manipulation by face deformation
Authors: Yuji Suzuki, Jotaro Shigeyama, S. Yoshida, Takuji Narumi, T. Tanikawa, M. Hirose
DOI: https://doi.org/10.1145/3230744.3230814
ACM SIGGRAPH 2018 Posters (2018)
Abstract: Food texture plays an important role in the experience of food. Researchers have proposed various methods to manipulate the perception of food texture using auditory and physical stimulation. In this paper, we demonstrate a system that presents visually modified mastication movements in real time to manipulate the perception of food texture, because visual stimuli are effective at enriching other food-related perceptions, and showing people a deformed view of their own posture changes somatosensory perception. The results of our experiments suggest that adding real-time feedback of facial deformation when participants open their mouths can increase the perceived chewiness of foods. Moreover, perceived hardness and adhesiveness increased when participants saw their modified face or listened to their unmodified chewing sound, but decreased when both stimuli were presented together. These results indicate the occurrence of a contrast effect.

Title: Practical measurement-based spectral rendering of human skin
Authors: Y. Gitlina, D. S. Dhillon, Jan Hansen, D. Pai, A. Ghosh
DOI: https://doi.org/10.1145/3230744.3230795
ACM SIGGRAPH 2018 Posters (2018)
Abstract: Realistic appearance modeling of human skin is an important research topic with a variety of applications in computer graphics. Various diffusion-based BSSRDF models [Jensen et al. 2001; Donner and Jensen 2005; Donner and Jensen 2006] have been introduced in graphics to efficiently simulate subsurface scattering in skin, including its layered structure. These models, however, assume homogeneous subsurface scattering parameters and produce spatial color variation using an albedo map. In this work, we build upon the spectral scattering model of [Donner and Jensen 2006] and target a practical measurement-based rendering approach for such a spectral BSSRDF. The model assumes that scattering in the two primary layers of skin (epidermis and dermis) can be modeled with relative melanin and hemoglobin chromophore concentrations, respectively. To drive this model for realistic rendering, we employ measurements of skin patches using an off-the-shelf Miravex Antera 3D camera, which provides spatially varying maps of these chromophore concentrations as well as the corresponding 3D surface geometry (see Figure 1), using a custom imaging setup.
{"title":"Wall mounted level: a cooperative mixed reality game about reconciliation","authors":"K. Swearingen, Scott Swearingen","doi":"10.1145/3230744.3230771","DOIUrl":"https://doi.org/10.1145/3230744.3230771","url":null,"abstract":"Wall Mounted Level is a cooperative mixed-reality game that leverages multimodal interactions to support its narrative of 'reconciliation'. In it, players control their digitally projected characters and navigate them across a hand drawn physical sculpture as they collaborate towards a shared goal: finding one another. The digital and physical characteristics of the game are further reflected in the ways in which players interact with it, by making use of digital input devices and physical 'touch'. The abstract and poster discuss the design choices that were made for creating the varying modes of engagement and the motivation behind player collaboration in 'Wall Mounted Level.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123147614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: The virtual schoolyard: attention training in virtual reality for children with attentional disorders
Authors: Katharina Krösl, A. Felnhofer, J. Kafka, L. Schuster, Alexandra Rinnerthaler, M. Wimmer, O. Kothgassner
DOI: https://doi.org/10.1145/3230744.3230817
ACM SIGGRAPH 2018 Posters (2018)
Abstract: This work presents a virtual reality simulation for training different attentional abilities in children and adolescents. In an interdisciplinary project between psychology and computer science, we developed four mini-games that are used during therapy sessions to address different aspects of attentional disorders. First experiments show that the immersive, game-like application is well received by children. Our tool is also currently part of a treatment program in an ongoing clinical study.
{"title":"Skin+","authors":"Feier Cao, M. Y. Saraiji, K. Minamizawa","doi":"10.1145/3230744.3230772","DOIUrl":"https://doi.org/10.1145/3230744.3230772","url":null,"abstract":"Wearable technologies have been supporting and augmenting our body and sensory functions for a long time. Skin+ introduces a novel bidirectional on-skin interface that serve not only as haptic feedback to oneself but also as a visual display to mediate touch sensation to others as well. In this paper, we describe the design of Skin+ and its usability in a variety of applications. We use a shape-changing auxetic structure to build this programmable coherent visuo-tactile interface. The combination of shape-memory alloy with an auxetic structure enables a lightweight haptic device that can be worn seamlessly on top of our skin.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126869848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Which BSSRDF model is better for heterogeneous materials?","authors":"Keiko Nakamoto, T. Koike","doi":"10.1145/3230744.3230761","DOIUrl":"https://doi.org/10.1145/3230744.3230761","url":null,"abstract":"We present an improved method for rendering heterogeneous translucent materials with existing BSSRDF models. In the general BSSRDF models, the optical properties of the target object are constant. Sone et al. have proposed a method to combine with existing BSSRDF models for rendering heterogeneous materials. However, the method generates more bright and blurred images compared with correctly simulated images. We have experimented with various BSSRDF models by the method and rendered heterogeneous materials. As a result, the rendered image with the better dipole model is the closest to the result of Monte carlo simulation. If incorporating the better dipole model into the method proposed by Sone et al., we can render more realistic images of heterogeneous materials.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114560277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MegaParallax: 360° panoramas with motion parallax","authors":"Tobias Bertel, Christian Richardt","doi":"10.1145/3230744.3230793","DOIUrl":"https://doi.org/10.1145/3230744.3230793","url":null,"abstract":"Capturing 360° panoramas has become straightforward now that this functionality is implemented on every phone. However, it remains difficult to capture immersive 360° panoramas with motion parallax, which provide different views for different viewpoints. Alternatives such as omnidirectional stereo panoramas provide different views for each eye (binocular disparity), but do not support motion parallax, while Casual 3D Photography [Hedman et al. 2017] reconstructs textured 3D geometry that provides motion parallax but suffers from reconstruction artefacts. We propose a new image-based approach for capturing and rendering high-quality 360° panoramas with motion parallax. We use novel-view synthesis with flow-based blending to turn a standard monoscopic video into an enriched 360° panoramic experience that can be explored in real time. Our approach makes it possible for casual consumers to capture and view high-quality 360° panoramas with motion parallax.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122736115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Curved support structures and meshes with spherical vertex stars
Authors: M. Kilian, Hui Wang, E. Schling, J. Schikore, H. Pottmann
DOI: https://doi.org/10.1145/3230744.3230787
ACM SIGGRAPH 2018 Posters (2018)
Abstract: The computation and construction of curved beams along freeform skins pose many challenges. We show how to use surfaces of constant mean curvature (CMC) to compute beam networks with beneficial properties, both aesthetically and from a fabrication perspective. To explore variations of such networks, we introduce a new discretization of CMC surfaces as quadrilateral meshes with spherical vertex stars and right node angles. The computed non-CMC surface variations can be seen as a path in design space, exploring possible solutions in a neighborhood, or as an actual erection sequence that exploits elastic material behavior.
{"title":"Deep motion transfer without big data","authors":"Byungjun Kwon, Moonwon Yu, Hanyoung Jang, KyuHyun Cho, Hyundong Lee, T. Hahn","doi":"10.1145/3230744.3230751","DOIUrl":"https://doi.org/10.1145/3230744.3230751","url":null,"abstract":"This paper presents a novel motion transfer algorithm that copies content motion into a specific style character. The input consists of two motions. One is a content motion such as walking or running, and the other is movement style such as zombie or Krall. The algorithm automatically generates the synthesized motion such as walking zombie, walking Krall, running zombie, or running Krall. In order to obtain natural results, the method adopts the generative power of deep neural networks. Compared to previous neural approaches, the proposed algorithm shows better quality, runs extremely fast, does not require big data, and supports user-controllable style weights.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121271998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CRISPR/Cas9-NHEJ: action in the nucleus","authors":"Martina R. Fröschl, A. Vendl","doi":"10.1145/3230744.3230747","DOIUrl":"https://doi.org/10.1145/3230744.3230747","url":null,"abstract":"CRISPR/Cas9-NHEJ: Action in the Nucleus (2017) is derived from an interdisciplinary creative process. This paper discusses the creation of this 210° scientific visualization, the usage of data from the worldwide Protein Data Bank, and the audio-visual presentation in an interactive dome setup. Since the topic is significant for the future of humanity, immersive experiences should be considered to convey tacit knowledge of gene-editing processes to make them approachable for the general public.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"146 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116079574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}