{"title":"CuisiNavi: Cooking guiding system with gesture recognition","authors":"N. Idehara, Y. Idehara","doi":"10.1145/3234253.3234308","DOIUrl":"https://doi.org/10.1145/3234253.3234308","url":null,"abstract":"This system guides the user through the process of cooking by recognizing their actions without markers. Cooking requires complex information analysis of timing, especially when they want to cook several dishes in parallel and they should be served at once. The system reduces the difficulty of cooking beginners, and at the same time, encourages the elderly people to continue cooking by themselves. The system is consisted of two modules: gesture recognition module and timing management module. The gesture recognition module detects predefined user activities such as cutting, peeling, mixing. Those activities might be different from culture to culture, thus the module is implemented to study the activity so that it could adapt to it. The timing management module can read the cooking recipes that is coded in timing chart. When several dishes should be served at the same time, which is quite common in Japanese cooking, the module organizes the procedures in timely manner within the limitation of given number of ranges so that final dishes are ready on time. At the exhibition, the visitor is explained the recipes of three dishes and asked to serve them at once. The minimum possible time to finish them is also presented. 
They proceed the steps by their gestures and the progress animation is displayed on the cooking table with a projector.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116189604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to Use Multi-pole Galvanic Vestibular Stimulation for Virtual Reality Application","authors":"Yosuke Nakayama, K. Aoyama, Takashi Kitao, T. Maeda, H. Ando","doi":"10.1145/3234253.3234311","DOIUrl":"https://doi.org/10.1145/3234253.3234311","url":null,"abstract":"Galvanic Vestibular Stimulation (GVS) is a technique that induces virtual acceleration (or virtual head motion) by applying electrical current to electrodes placed on the bilateral mastoids. Since the vestibular sensation also closely reality of experience, it is a promising technique for virtual reality (VR) systems for presenting a highly realistic experience. However, the usual GVS can induce only lateral directional acceleration sensation. Thus, we invented four-pole GVS is able to induce multi directional virtual acceleration (i.e., lateral, anteroposterior, and yaw rotation). This method could realize the novel head set which can adding virtual motion sensation and virtual impacts sensation. In this paper, we explain two examples of new applications named \"GVS RIDE\" and \"Beaten by Virtual Character\" which gives a highly realistic experience using four-pole GVS and a Head Mounted Display (HMD) in synchronization.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134647799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MasQueRade: Onsite QR Code based VR Experience Evaluation System using Sanitary Mask","authors":"Rex Hsieh, M. Higashida, Yuya Mochizuki, Takaya Asano, Akihiko Shirai, Hisashi Sato","doi":"10.1145/3234253.3234315","DOIUrl":"https://doi.org/10.1145/3234253.3234315","url":null,"abstract":"The number of Virtual Reality applications has increased tremendously in the recent years to the point where every single digital entertainment company is investing heavily in VR systems. This increase in VR products demands the improvement in the evaluation of VR experience since current evaluations require an attendee per survey taker and can only move onto the next survey taker after the current survey is done. Traditional evaluations also require many evaluation machines if done digitally, costing survey takers unnecessary expenses. \"MasQueRade\" is a QR code based instant user feedback online system. This system allows users to scan the QR code on their VR sanitary masks and access an online evaluation system on their own mobile phones. This enables users to conduct the evaluation on their own free time and decreases the expenses surveyors have to spend on machines, therefore greatly decreases the manpower and time required to conduct the evaluations. While this approach to solving the issue of obtaining user feedback may sound elementary, the amount of efforts and resources \"MasQueRade\" saves by transferring the evaluation from a paper or digital form into an online database gives near infinite possibilities in the future of gathering feedback and evaluation. 
This paper seeks to explain the functions of \"MasQueRade\" and the results the team obtains during Anime Expo 2017 and propose a real-time live user VR commentary system drawing inputs form the attendees.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125890844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Immersive Interfaces for Engagement and Learning: Cognitive Implications","authors":"J. Dinet, M. Kitajima","doi":"10.1145/3234253.3234301","DOIUrl":"https://doi.org/10.1145/3234253.3234301","url":null,"abstract":"Immersive Virtual Environments are distinct from other types of multimedia learning environments. But, if immersion defined the subjective impression that one is participating in a comprehensive and a realistic experience, immersive-ness is generally defined only from a systemic point of view (e.g., capacity to track users' movements, facial expressions and gestures, quality of appearance, combination of multi-sensory information, design of the virtual world). Moreover, nowadays, it does not exist a robust theoretical framework to describe and to predict immersive-ness from a user-point of view. So this paper is aiming to assume that (a) immersive-ness should be defined from a cognitive user point of view, and that (b) the cognitive architecture called MHP/RT (for Model Human Processor with Realtime Constraints) is relevant to understand and to predict immersive-ness. After a presentation of the MHP/RT model and the distributed memory system related to conscious and unconscious processes, we present the conditions necessary to produce an \"immersive experience\" for the user, and a case study is described as an example. 
Theoretical and methodological perspectives are discussed.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123748361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temari and Shadow: an interactive installation combining virtual handicraft and real shadow","authors":"Noriyasu Obushi, M. Koshino","doi":"10.1145/3234253.3234321","DOIUrl":"https://doi.org/10.1145/3234253.3234321","url":null,"abstract":"To provide cultural night experiences with tourists in Kanazawa City, we exhibited our interactive digital installation titled \"Temari and Shadow\" at the open space of the 21st Century Museum of Contemporary Art, Kanazawa. Our installation utilizes shadows cast by people standing in front of a projector. We chose Kanazawa's Kaga Temari, a hand-sized colorful toy ball made of thread, as a motif of this work. Our shadow-based system achieved 60 fps in rendering and we estimate that the time required for the response of a shadow is 133 ms. However, the latency can be shortened by a projector with a higher refresh rate. The result of the questionnaire at the exhibition suggests that our system is widely accepted by the people of various ages.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123498804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparative Study on Conventional versus Immersive Service Prototyping (VR, AR, MR)","authors":"Abdul Rahman Abdel Razek, Christian van Husen, M. Pallot, S. Richir","doi":"10.1145/3234253.3234296","DOIUrl":"https://doi.org/10.1145/3234253.3234296","url":null,"abstract":"Product prototyping, through the use of immersive technologies, has demonstrated its huge potential enabling co-creative exploration of different usage scenarios and evaluation of the User eXperience. It is already an extremely relevant and valuable activity in many industries and revealed as an essential element of experience design. Service prototyping is a new prominent progressive process used within service innovation intended to improve the service experience and quality while accelerating the service development process. Different types of service prototypes can be used to encompass all the different service elements throughout the service design and engineering processes. This paper presents a comparative study between the conventional and immersive service prototyping This comparison encompasses application, advantages and disadvantages of these different service prototyping. Several use cases of immersive service prototyping, either based on Virtual, Augmented or Mixed Reality technologies, are presented. This study aims to improve the body of knowledge on the use of immersive service prototyping. This is intended to help service designer understand what can be done with immersive service prototyping, and increase awareness on service prototyping. 
The main objective is to provide a guidance to service designers for selecting the most appropriate immersive service prototyping techniques per each case specificity.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122253458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Contextual Semantic Interaction Interface for Virtual Reality Environments","authors":"Jacek Sokołowski, K. Walczak","doi":"10.1145/3234253.3234300","DOIUrl":"https://doi.org/10.1145/3234253.3234300","url":null,"abstract":"Non-trivial virtual environments often require implementation of complex forms of user interaction, including spatial navigation within the whole environment as well as various forms of interaction with particular components of the environment. Dynamic environments, in which content or context changes in response to user interaction, raise serious problems to both content designers and users. Often in-scene interaction elements are used, but they impede perception of the virtual environment, especially in multi-user scenarios, in which majority of users are passive observers. In this paper, a new approach to navigation and interaction with three-dimensional virtual environments is proposed. This approach uses semantic metadata attached to particular components of the environment to build a dynamic interaction interface on a user's handheld mobile device. The interface is built in a contextual manner, enabling a user to interact with the virtual environment with full awareness of the possible actions. 
Prototypes of the server and the client modules have been implemented using the Unity engine, and the system has been tested on a large-size Powerwall setup.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132085353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"InVRsion: Soaring through pixels","authors":"Vincent Rieuf, A. Solignac","doi":"10.1145/3234253.3234313","DOIUrl":"https://doi.org/10.1145/3234253.3234313","url":null,"abstract":"This work presents the design of a virtual training platform. This platform is specifically designed for tasks requiring a balance between theoretical and embodied knowledge. Our use case aims at training paragliding pilots to various aspects of pendular flight. Complex dynamics, invisible risks, data scarce environment and a practice oriented task, makes paragliding a perfect ambassador for various challenging tasks. This design run aims at identifying a generic methodology for virtual training platform design. This work achieved through a partnership with the French Federation of Free Flight (FFVL) and the paragliding wing designer and producer Sup'Air. Our goal is to compete at ReVolution 2018 -- Laval Virtual and present an original, fun and pedagogic experience to the general public in order to validate the genericity of our methodology.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"240 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114605516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feasibility of Team Training in Virtual Reality for Robot-Assisted Minimally Invasive Surgery","authors":"Nicklas H. Christensen, Oliver G. Hjermitslev, Niclas H. Stjernholm, F. Falk, Atanas Nikolov, M. Kraus, J. Poulsen, Jane Petersson","doi":"10.1145/3234253.3234295","DOIUrl":"https://doi.org/10.1145/3234253.3234295","url":null,"abstract":"The rate of evolution of surgical robotics has continuously increased since its inception. Training and experience with the robots is a key factor to successful operations but training is time-consuming and expensive, as the equipment used is expensive and has a short life-time. To explore the possibility of enabling alternative training methods for robot assisted surgery, we designed a multi-user virtual reality simulation of team training as practised in certified institutes and conducted an expert review in cooperation with a training centre for minimally invasive surgery, first nurse assistant and nurse specialist in robot surgery Jane Petersson and head surgeon at Aalborg University Hospital Johan Poulsen. To this end, a contextual study was conducted to ensure realism and accuracy of the simulation. The experts were positive about the system's future, however it was not considered sufficiently complete for use in actual surgery training at this stage. 
More scenarios and features would be required in future implementations to allow for near full training sessions to be performed in virtual reality.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127712131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Reality Simulator for Construction workers","authors":"Mehdi Hafsia, É. Monacelli, Hugo Martin","doi":"10.1145/3234253.3234298","DOIUrl":"https://doi.org/10.1145/3234253.3234298","url":null,"abstract":"Construction industries represent the worst industries regarding injury and fatality statistics due to security and health issues. In this field, if not performed correctly, the tasks have risk of causing work related pains or cause death. Currently training sessions are available to train construction workers and are based on two different workshops: Theoretical and practical sessions. However, the risk of injury is still present in the current practical sessions. Therefore, virtual reality might offer a risk free training and raise awareness of health and safety issues amongst construction workers on the workfield. In this paper, we describe the application developed to train construction workers on the operating mode of a formwork panel and the stabilization task in particular and the tests conducted with various Bouygues Construction experts. First results showed that the application was conclusive and cost efficient. In addition the employees assessed the application as innovative, relevant and useful. 
In conclusion, virtual reality could be of added value in the training process to reduce safety and health issues in the construction industry.","PeriodicalId":137787,"journal":{"name":"Proceedings of the Virtual Reality International Conference - Laval Virtual","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122699650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}