{"title":"Towards assembly steps recognition in augmented reality","authors":"A. Rukubayihunga, Jean-Yves Didier, S. Otmane","doi":"10.1145/2927929.2927953","DOIUrl":"https://doi.org/10.1145/2927929.2927953","url":null,"abstract":"Augmented Reality is a media which purpose is to attach digital information to real world scenes in order to enhance the user experience. It has been used in the field of maintenance in order to show the user the operations he has to perform. Our goal is to go one step further so that our system is able to detect when the user has performed a step of the task. It requires some understanding of what is occurring and where objects are located in order to display correct instructions for the task. This paper is focusing on using an intermediate computation result of the usual augmented reality process, which is the pose computation: we propose to use the transformation matrix not only for objects pose estimation, but also to characterise their motion during an assembly task. With this matrix, we can induce spatial relationship between assembly parts and determine which motion occurs. Then we analyse translation and rotation parameters contained in the transformation matrix during the action. We demonstrate that these data correctly characterise the movement between object's fragments. Therefore, by analysing such a matrix, not only we can achieve the required registration step of the augmented reality process, but we can also understand the actions performed by the user.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"387 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123196603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling simple human-robot collaborative manufacturing tasks in interactive virtual environments","authors":"Elias Matsas, G. Vosniakos, Dimitrios Batras","doi":"10.1145/2927929.2927948","DOIUrl":"https://doi.org/10.1145/2927929.2927948","url":null,"abstract":"This paper presents in brief a novel interactive Virtual Environment (VE) that simulates in real-time collaborative manufacturing tasks between a human and an industrial robotic manipulator, working in close proximity, while sharing their workspaces. The use case scenario is highly collaborative and incorporates a wide variety of interaction tasks, such as: collaborative handling, manipulation, removal, placement and laying of carbon fabric composite parts. A Kinect sensor and a Head Mounted Display (Oculus Rift) are employed as 3D User Interfaces for interaction, immersion and skeletal tracking of the user motion. In this paper, particular emphasis is given to the various interaction techniques used to facilitate implementation of virtual Human-Robot Collaboration (HRC). The collaborative tasks are principally executed with contactless, natural and direct interaction. In addition, two novel interaction metaphors were developed. The real fabric laying task and the backing film removal task are reproduced in the VE with the implementation of the \"follow-my-hand\" technique; the user has to follow with his hand a virtual hand-like index (guide) that moves along a predefined pattern. Preliminary findings concerning the effectiveness of HRC modelling tasks are positive, and are briefly discussed.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126046356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye tracking for understanding aesthetic of ambiguity","authors":"Elhem Younes, John Bardakos, A. Lioret","doi":"10.1145/2927929.2927960","DOIUrl":"https://doi.org/10.1145/2927929.2927960","url":null,"abstract":"In this paper, we describe two interactive installations designed using eye tracking technology to explore perception and imagination processes in the presence of ambiguous art forms.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128471409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart glasses with a peripheral vision display","authors":"Takuro Nakao, M. Nakatani, Liwei Chan, K. Kunze","doi":"10.1145/2927929.2927938","DOIUrl":"https://doi.org/10.1145/2927929.2927938","url":null,"abstract":"This paper describes a design for smart glasses with a peripheral vision display. We show that users are able to perceive information from our device. We explore different animation patterns. The recognition rates for over 8 patterns are over 80 %. We also evaluate if users can recognize the patterns still while watching a video (for 5 patterns a recognition from the user of over 90 %), 7 users with in total over 637 patterns shown. In an first application case, we just focus on notification. Yet as related work shows, user interactions utilizing peripheral vision.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122535542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"POTEL: low cost realtime virtual pottery maker simulator","authors":"Juan Sebastian Muñoz Arango, C. Cruz-Neira","doi":"10.1145/2927929.2927949","DOIUrl":"https://doi.org/10.1145/2927929.2927949","url":null,"abstract":"In this paper we introduce an affordable system to model virtual pottery without the need to hold any device to interact with environment, just the user's plain hands. The system uses an Oculus VR for visualizing the world, a Leap Motion for interaction with the environment and an optional Arduino with celphone vibrators for haptic feedback. We do realtime clay deformation by extruding / compressing triangle vertices on a radius of influence and interact with the world by pressing buttons with the user's index finger. Finally the system can also 3D print the final pottery creation if the player wants.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116923857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Industry future & new uses","authors":"M. Pallot","doi":"10.1145/3257314","DOIUrl":"https://doi.org/10.1145/3257314","url":null,"abstract":"","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129653330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ambiculus: LED-based low-resolution peripheral display extension for immersive head-mounted displays","authors":"Paul Lubos, G. Bruder, Oscar Ariza, Frank Steinicke","doi":"10.1145/2927929.2927939","DOIUrl":"https://doi.org/10.1145/2927929.2927939","url":null,"abstract":"Peripheral vision in immersive virtual environments is important for application fields that require high spatial awareness and veridical impressions of three-dimensional spaces. Head-mounted displays (HMDs), however, use displays and optical elements in front of a user's eyes, which often do not natively support a wide field of view to stimulate the entire human visual field. Such limited visual angles are often identified as causes of reduced navigation performance and sense of presence. In this paper we present an approach to extend the visual field of HMDs towards the periphery by incorporating additional optical LED elements structured in an array, which provide additional low-resolution information in the periphery of a user's eyes. We detail our approach, technical realization, and present an experiment, in which we show that such far peripheral stimulation can increase subjective estimates of presence, and has the potential to change user behavior during navigation in a virtual environment.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125762724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of tangible interface geometry on task completion for learning and training","authors":"Matthieu Tessier, M. Ura, K. Miyata","doi":"10.1145/2927929.2927963","DOIUrl":"https://doi.org/10.1145/2927929.2927963","url":null,"abstract":"In this paper we analyze the impact of tangible interface geometry on the completion of a task. We focus on the changes in behavior for the user in terms of physical engagement and motion in space as well as emotional changes. This research is oriented toward educational environments for teaching purposes, such as schools and museums, but the core interaction and results of our experimentation could be applied to training.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131337763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving shape perception in virtual reality systems using toed-in cameras","authors":"David Aurat, Laure Leroy, Olivier Hugues, P. Fuchs","doi":"10.1145/2927929.2927936","DOIUrl":"https://doi.org/10.1145/2927929.2927936","url":null,"abstract":"Nowadays, actual stereoscopic 3D renderers use two cameras to render images to the screen. These two cameras have parallel optical axis. It is well know that if cameras converge, then it produces distortions called vertical parallaxes. These distortions are supposed to stress visual system, but we do not know the effect on perception. In this article, we will test if these vertical parallaxes can improve shape perception. We found out that vertical parallaxes does improve shape perception, but the effect is decreased when the object is far from the user because when the cameras converge far away it get closer to a parallel configuration.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122087295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Changing body ownership using visual metamorphosis","authors":"Tomoya Sasaki, M. Y. Saraiji, K. Minamizawa, M. Kitazaki, M. Inami","doi":"10.1145/2927929.2927961","DOIUrl":"https://doi.org/10.1145/2927929.2927961","url":null,"abstract":"This paper presents a study of using supernumerary arms experience in virtual reality applications. In this study, a system was developed that alternates user's body scheme and motion mapping in real-time when the user interacts with virtual contents. User arms and hands are tracked and mapped into several virtual arms instances that were generated from user's first point of view (FPV), and are deviated from his physical arms position at different angles. Participants reported a strong sense of body ownership toward the extra arms after interacting and holding virtual contents using them. Our finding is body ownership perception can be altered based on the condition used. Also, one interesting finding in this preliminary experiment is that the participants reported strong ownership toward the arm that actually is not holding the virtual object. This study contributes in the fields of augmented bodies, multi-limbs applications, as well as prosthetic limbs.","PeriodicalId":113875,"journal":{"name":"Proceedings of the 2016 Virtual Reality International Conference","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132178655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}