{"title":"Balanced Glass Design: A flavor perception changing system by controlling the center-of-gravity","authors":"Masaharu Hirose, M. Inami","doi":"10.1145/3450550.3465344","DOIUrl":"https://doi.org/10.1145/3450550.3465344","url":null,"abstract":"In this paper, we propose Balanced Glass Design, a system to change flavor perception. The system consists of a glass-type device that shifts its center of gravity in response to the user’s motion, allowing the user to drink a beverage with a virtual perception of weight during the drinking motion. We hypothesized that it is possible to intervene in the user’s perception of flavor by displaying a virtual weight perception, and therefore conducted experiments on weight perception and demonstrations as a user study. This paper describes the system design, the results of the experiments, and comments obtained through the user study.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114483952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reverse Pass-Through VR","authors":"N. Matsuda, B. Wheelwright, J. Hegland, Douglas Lanman","doi":"10.1145/3450550.3465338","DOIUrl":"https://doi.org/10.1145/3450550.3465338","url":null,"abstract":"We introduce reverse pass-through VR, wherein a three-dimensional view of the wearer’s eyes is presented to multiple outside viewers in a perspective-correct manner, with a prototype headset containing a world-facing light field display. This approach, in conjunction with existing video (forward) pass-through technology, enables more seamless interactions between people with and without headsets in social or professional contexts. Reverse pass-through VR ties together research in social telepresence and copresence, autostereoscopic displays, and facial capture to enable natural eye contact and other important non-verbal cues in a wider range of interaction scenarios.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"69 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123117745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demonstrating Touch&Fold: A Foldable Haptic Actuator for Rendering Touch in Mixed Reality","authors":"Shan-Yuan Teng, Pengyu Li, Romain Nith, Joshua Fonseca, Pedro Lopes","doi":"10.1145/3450550.3465340","DOIUrl":"https://doi.org/10.1145/3450550.3465340","url":null,"abstract":"We propose a nail-mounted foldable haptic device that provides tactile feedback to mixed reality (MR) environments by pressing against the user's fingerpad when a user touches a virtual object. What is novel in our device is that it quickly tucks away when the user interacts with real-world objects. Its design allows it to fold back on top of the user's nail when not in use, keeping the user's fingerpad free to, for instance, manipulate handheld tools and other objects while in MR. To achieve this, we engineered a wireless and self-contained haptic device, which measures 24×24×41 mm and weighs 9.5 g. Furthermore, our foldable end-effector also features a linear resonant actuator, allowing it to render not only touch contacts (i.e., pressure) but also textures (i.e., vibrations). We demonstrate how our device renders contacts with MR surfaces, buttons, low- and high-frequency textures.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128032104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MetamorHockey: A Projection-based Virtual Air Hockey Platform Featuring Transformable Mallet Shapes","authors":"Shun Ueda, S. Kagami, K. Hashimoto","doi":"10.1145/3450550.3465341","DOIUrl":"https://doi.org/10.1145/3450550.3465341","url":null,"abstract":"We propose a novel projection-based virtual air hockey system in which not only the puck but also the mallet is displayed as an image. Being a projected image, the mallet can freely “metamorphose” into different shapes, which expands the game design beyond the original air hockey. We discuss possible scenarios with a resizable mallet, with mallet shapes defined by drawing, and with a mallet whose collision conditions can be modified. A key implementation challenge is minimizing latency, because the direct-manipulation nature of mallet positioning imposes a stricter latency demand than puck positioning. Using a high-speed camera and a high-speed projector running at 420 fps, tracking became sufficiently quick that the projected mallet head feels like an integral part of the hand-held mallet.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132222896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sustainable society with a touchless solution using UbiMouse under the pandemic of COVID-19","authors":"Daisuke Akagawa, Junichi Takatsu, Ryoji Otsu, Seiichi Hayashi, Benjamin Vallet","doi":"10.1145/3450550.3470532","DOIUrl":"https://doi.org/10.1145/3450550.3470532","url":null,"abstract":"This paper introduces new artificial intelligence software capable of controlling devices with fingers in the air. With UbiMouse, touch panels, restaurant ordering systems, ATM systems, and other devices commonly used by the public can become contact-less. Such touch-less devices are desirable, especially under the harsh conditions of COVID-19, to prevent infections mediated by shared touch surfaces. In addition, conventional touch devices cannot be used while wearing gloves because touch sensing fails; thus, in fields where gloves are worn, there is a demand for non-contact device operation. To satisfy these demands, we developed “UbiMouse”, AI software that allows the user to operate a device by moving their fingers toward it. UbiMouse uses a convolution model to identify finger features from camera footage and a regression model to estimate the position of a detected finger. We demonstrate contact-free operation of UbiMouse: as the user moves a finger in the air, the mouse cursor is guided to the specified location with high accuracy.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115353711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polarimetric Spatio-Temporal Light Transport Probing","authors":"Seung-Hwan Baek, Felix Heide","doi":"10.1145/3450550.3469737","DOIUrl":"https://doi.org/10.1145/3450550.3469737","url":null,"abstract":"Fig. 1. We propose a computational light transport probing method that decomposes transport into full polarization, spatial and temporal dimensions. We model this multi-dimensional light transport as a tensor and analyze low-rank structure in the polarization domain, which is exploited by our polarimetric probing method. We instantiate our approach with two imaging systems for spatio-polarimetric and coaxial temporal-polarimetric capture. (a)&(d) Conventional intensity imagers integrate incident light intensity over space and time independently of the polarization states of light, losing geometric and material information encoded in the polarimetric transport. Capturing polarization-resolved spatial transport components of (b) epipolar and (c) non-epipolar dimensions enables fine-grained decomposition of light transport. Combining temporal and polarimetric dimensions, we separate (e) geometry-dependent reflections and (f) direct/indirect reflections that cannot be resolved in temporal-only measurements.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115569596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demonstrating MagnetIO: Passive yet Interactive Soft Haptic Patches Anywhere","authors":"Alex Mazursky, Shan-Yuan Teng, Romain Nith, Pedro Lopes","doi":"10.1145/3450550.3465342","DOIUrl":"https://doi.org/10.1145/3450550.3465342","url":null,"abstract":"We demonstrate a new type of haptic actuator, which we call MagnetIO, that comprises two parts: one battery-powered voice coil worn on the user’s fingernail and any number of interactive soft patches that can be attached to any surface (everyday objects, the user’s body, appliances, etc.). When the user’s finger, wearing our voice coil, contacts one of the interactive patches, the device detects the patch’s magnetic signature via a magnetometer and vibrates the patch, adding haptic feedback to otherwise input-only interactions. To allow these passive patches to vibrate, we make them from silicone with regions doped with polarized neodymium powder, resulting in soft and stretchable magnets. This stretchable form factor allows them to be wrapped around the user’s body or around everyday objects of various shapes. We demonstrate how these patches add haptic output in many situations, such as adding haptic buttons to the walls of one’s home.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127318317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented reality representation of virtual user avatars moving in a virtual representation of the real world at their respective real world locations","authors":"Christoph Leuze, Matthias Leuze","doi":"10.1145/3450550.3465337","DOIUrl":"https://doi.org/10.1145/3450550.3465337","url":null,"abstract":"In this work we present an augmented reality (AR) application that allows a user with an AR display to watch another user, who is flying an airplane in Microsoft Flight Simulator 2020 (MSFS), at the corresponding location in the real world. To do this, we take the location data of a virtual 3D airplane model in the virtual representation of the world of a user playing MSFS and stream it via a server to a mobile device. The mobile device user can then see the same 3D airplane model at exactly the real-world location that corresponds to the location of the virtual airplane in the virtual representation of the world. The mobile device user also sees the avatar’s movement updated according to the airplane’s movement in the virtual world. We implemented the application on both a cellphone and a see-through headset.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130066807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behind The Game: Implicit Spatio-Temporal Intervention in Inter-personal Remote Physical Interactions on Playing Air Hockey","authors":"Azumi Maekawa, Hiroto Saito, Narin Okazaki, Shunichi Kasahara, M. Inami","doi":"10.1145/3450550.3465348","DOIUrl":"https://doi.org/10.1145/3450550.3465348","url":null,"abstract":"When playing inter-personal sports games remotely, the time lag between user actions and feedback decreases the user’s performance and sense of agency. While computational assistance can improve performance, naive intervention that ignores the context also compromises the user’s sense of agency. We propose a context-aware assistance method that restores both user performance and sense of agency, and we demonstrate the method using air hockey (a two-dimensional physical game) as a testbed. Our system includes a 2D plotter-like machine that controls the striker on half of the table surface, and a web application interface that enables manipulation of the striker from a remote location. Using our system, a remote player can play against a physical opponent from anywhere through a web browser. We designed the striker control assistance to be context-aware by computationally predicting the puck’s trajectory from real-time captured video. With this assistance, the remote player exhibits improved performance without compromising their sense of agency, and both players can experience the excitement of the game.","PeriodicalId":286424,"journal":{"name":"ACM SIGGRAPH 2021 Emerging Technologies","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128232338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}