{"title":"Guidance field: vector field for implicit guidance in virtual environments","authors":"R. Tanaka, Takuji Narumi, T. Tanikawa, M. Hirose","doi":"10.1145/2929464.2929468","DOIUrl":"https://doi.org/10.1145/2929464.2929468","url":null,"abstract":"A 'guidance field' is a kind of a vector field that implicitly guides users to a target point. Users' input for travelling in virtual environments is slightly altered to get closer to the target directions according to the guidance field.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"197 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122647659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ratchair: furniture learns to move itself with vibration","authors":"Tetiana Parshakova, Minjoo Cho, Á. Cassinelli, D. Saakes","doi":"10.1145/2929464.2929473","DOIUrl":"https://doi.org/10.1145/2929464.2929473","url":null,"abstract":"An Egyptian statue on display at the Manchester Museum mysteriously spins on its axis every day; it is eventually discovered that this is due to anisotropic friction forces, and that the motile power comes from imperceptible mechanical waves caused by visitors' footsteps and nearby traffic. This phenomena involves microscopic ratchets, and is pervasive in the microscopic world - this is basically how muscles contract. It was the source of inspiration to think about everyday objects that move by harvesting external vibration rather than using mechanical traction and steering wheels. We propose here a strategy for displacing objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a very different specified position. We describe and demonstrate two application scenarios, namely assisted transportation of heavy objects with little effort on the part of the human and self arranging furniture, useful for instance to clean classrooms or restaurants during vacant hours.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128569891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FinGAR: combination of electrical and mechanical stimulation for high-fidelity tactile presentation","authors":"Vibol Yem, Ryuta Okazaki, H. Kajimoto","doi":"10.1145/2929464.2929474","DOIUrl":"https://doi.org/10.1145/2929464.2929474","url":null,"abstract":"It is known that our touch sensation is a result of activities of four types of mechanoreceptors, each of which responds to different types of skin deformation; pressure, low frequency vibration, high frequency vibration, and shear stretch. If we could selectively activate these receptors, we could combine and present any types of tactile sensation. This approach has been studied but not fully achieved. In our study, we developed FinGAR (Finger Glove for Augmented Reality), in which we combined electrical and mechanical stimulation to selectively stimulate these four channels and thus to achieve high-fidelity tactile sensation. The electrical stimulation with array of electrodes presents pressure and low frequency vibration with high spatial resolution, while the mechanical stimulation with DC motor presents high frequency vibration and shear deformation of the whole finger. Furthermore, FinGAR is lightweight, simple in mechanism, easy to wear, and does not disturb the natural movement of the finger, all of which are necessary for general-purpose virtual reality system.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115883211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LightAir: a novel system for tangible communication with quadcopters using foot gestures and projected image","authors":"Mikhail Matrosov, O. Volkova, D. Tsetserukou","doi":"10.1145/2929464.2932429","DOIUrl":"https://doi.org/10.1145/2929464.2932429","url":null,"abstract":"We propose a new paradigm of human-drone interaction through projecting image on the road and foot gestures. The proposed technology allowed to create a new type of tangible interaction with drone, i.e., DroneBall game for augmented sport and FlyMap to let a drone know where to fly. We developed LightAir system that makes possible information sharing, GPS-navigating, controlling and playing with drones in a tangible way. In contrast to the hand gestures, that are common for smartphones, we came up with the idea of foot gestures and projected image for tangible interaction. Such gestures make communication with the drone intuitive, natural, and safe. To our knowledge, it is the world's first system that provides the human-drone bilateral tangible interaction.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128390975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Zushiki light art: form finding and making through paper folding","authors":"Jiangmei Wu","doi":"10.1145/2929464.2956557","DOIUrl":"https://doi.org/10.1145/2929464.2956557","url":null,"abstract":"","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134185679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Big Robot Mk.1A","authors":"Hiroo Iwata, Y. Kimura, Hikaru Takatori, Yuki Enzaki","doi":"10.1145/2929464.2929466","DOIUrl":"https://doi.org/10.1145/2929464.2929466","url":null,"abstract":"The Big Robot Mk.1A has two legs with wheels, mounting the pilot at 5m height position. The robot goes forward according with the motion of the feet of the pilot. It is programed to make trajectory of head position of 5m humanoid. Thus, the pilot feels as if his/her body were extended to 5m giant.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125582334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LiDARMAN: reprogramming reality with egocentric laser depth scanning","authors":"Takashi Miyaki, J. Rekimoto","doi":"10.1145/2929464.2929481","DOIUrl":"https://doi.org/10.1145/2929464.2929481","url":null,"abstract":"This paper introduces a method to reprogram reality by substituting visual perception. According to several studies in psychology, our construction of reality is only a subjective experience, and we have an ability to adapt the modified perception unconsciously. As we use a Light Detection And Ranging (LiDAR) sensor to provide altered vision, the system can provide a novel 3D reconstructed view from outside of the body. We explore factors that affect the behavior of the user with the out-of-body vision using a prototype of our proposed system LiDARMAN. Three different representations (1st person camera, 3rd person camera, plan view) are investigated to explore potential applications such like navigation, security, or remote collaboration.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120848843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Layered telepresence: simultaneous multi presence experience using eye gaze based perceptual awareness blending","authors":"M. Y. Saraiji, Shota Sugimoto, C. Fernando, K. Minamizawa, S. Tachi","doi":"10.1145/2929464.2929467","DOIUrl":"https://doi.org/10.1145/2929464.2929467","url":null,"abstract":"We propose \"Layered Telepresence\", a novel method of experiencing simultaneous multi-presence. Users eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges audio-visual information received through multiple robots into a priority-driven layered stack. A weighted feature map was created based on the objects recognized for each layer, using image-processing techniques, and pushes the most weighted layer around the users gaze in to the foreground. All other layers are pushed back to the background providing an artificial depth-of-field effect. The proposed method not only works with robots, but also each layer could represent any audio-visual content, such as video see-through HMD, television screen or even your PC screen enabling true multitasking.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115901793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unlimited corridor: redirected walking techniques using visuo haptic interaction","authors":"Keigo Matsumoto, Yuki Ban, Takuji Narumi, Yohei Yanase, T. Tanikawa, M. Hirose","doi":"10.1145/2929464.2929482","DOIUrl":"https://doi.org/10.1145/2929464.2929482","url":null,"abstract":"The main contribution is to realize an efficient redirected working (RDW) technique by utilizing haptic cues for strongly modifying our spatial perception. Some research has shown that users can be redirected on a circular arc with a radius of at least 22 m without being able to detect the inconsistency by showing a straight path in the virtual world. However, this is still too large to enable the presentation of a demonstration in a restricted space. Although most of RDW techniques only used visual stimuli, we recognize space with multi-modalities. Therefore, we propose an RDW method using the visuo-haptic interaction, and develop the system, which displays a visual representation of a flat wall and users virtually walk straight along it, although, in reality, users walk along a convex surface wall with touching it. For the demonstration, we develop the algorithm, with which we can modify the amount of distortion dynamically to make a user walk straight infinity and turn a branch freely. With this system, multiple users can walk an endless corridor in a virtual environment at the same time.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115195368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"X-SectionScope: cross-section projection in light field clone image","authors":"Yoshikazu Furuyama, Atsushi Matsubayashi, Yasutoshi Makino, H. Shinoda","doi":"10.1145/2929464.2929483","DOIUrl":"https://doi.org/10.1145/2929464.2929483","url":null,"abstract":"In this paper, we propose a novel interactive 3D information visualizing display that superimposes a cross-sectional image in an aerial volumetric image of an object. Figure 2 shows a system configuration. A user can see internal images of the object, such like an X-ray image, by inserting a semi-transparent handheld screen in a cloned floating image. We use two Micro Mirror Array Plates (MMAPs) to reproduce a Light Field Clone (LFC) image. The MMAP (Aerial Imaging Plate, ASUKANET. Co., Ltd.) was designed for reconstructing aerial images in midair based on double reflections. The LFC image is a reconstructed floating 3D image which can be seen without wearing any glasses. In the HaptoClone system [Makino et al. 2015], they proposed the use of two MMAPs to reproduce LFC image of the object. By contrast, we use additional two general mirrors in the proposed system. With this configuration, the realistic LFC image appears next to the object keeping the facing direction same. Users can see both the real object and its cloned image at the same time.","PeriodicalId":314962,"journal":{"name":"ACM SIGGRAPH 2016 Emerging Technologies","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116421013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}