{"title":"Collaborating & being together: influence of screen size and viewing distance during video communication","authors":"Virginie Dagonneau, Elise Martin, M. Cosquer","doi":"10.1145/2617841.2620717","DOIUrl":"https://doi.org/10.1145/2617841.2620717","url":null,"abstract":"Through videoconferencing, people seek to interact and communicate with remote friends or family as if they were together in the same place. The influence of form variables such as screen size has mainly been investigated for the sense of physical presence (presence as transportation) in virtual environments and in the television domain, but less attention has been paid to how these factors influence the sense of co-presence in videoconferencing. In addition, preferred viewing distance is known to be a key parameter for conveying a sense of presence and enjoyment when people watch television, but there are currently no data on preferred viewing distance in videoconferencing. This paper presents a user study that explores the influence of screen size on participants' sense of co-presence and on their preferred viewing distance. The main results revealed that users preferred to get closer to the screen when communicating via videoconferencing than when watching a TV program. Our study suggests that screen size affects both the preferred viewing distance and participants' sense of co-presence, with higher co-presence scores for larger screens.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122109904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive 3D subdomaining using adaptive FEM based on solutions to the dual problem","authors":"H. Graf, M. Larson, A. Stork","doi":"10.1145/2617841.2620696","DOIUrl":"https://doi.org/10.1145/2617841.2620696","url":null,"abstract":"This paper presents a new technique for automatic, interactive 3D subdomaining coupled with mesh and simulation refinement to enhance the local resolution of CAE domains. Numerical simulations have become crucial in the product development process (PDP) for predicting properties of new products as well as for simulating various natural phenomena. \"What-if\" scenarios and conceptual changes to either the boundary or the domain are time-consuming and cost-intensive. Most of the time, engineers are interested in a deeper understanding of local quantities rather than an iterative re-simulation of the overall domain. New techniques for automatic and interactive processes are challenged by the cardinality and structural complexity of the CAE domain. This paper introduces a new interactive technique that automatically reduces the analysis space and allows engineers to enhance the resolution of local problems without recalculating the global problem. The technique, integrated into a VR-based front end, achieves faster reanalysis cycles compared with traditional COTS tool chains and engineering workflows.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130236384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatializing experience: a framework for the geolocalization, visualization and exploration of historical data using VR/AR technologies","authors":"Daniel Pacheco, Sytse Wierenga, P. Omedas, Stefan Wilbricht, H. Knoch, P. Verschure","doi":"10.1145/2617841.2617842","DOIUrl":"https://doi.org/10.1145/2617841.2617842","url":null,"abstract":"In this study we present a novel ICT framework for the exploration and visualization of historical information using Augmented Reality (AR) and geolocalization. The framework facilitates the geolocalization of multimedia files, as well as their later retrieval and visualization through an AR paradigm in which a virtual reconstruction is matched to the user's position and viewing angle. The main objective of the architecture is to enhance human-data interaction with cultural heritage content in outdoor settings and to generate more engaging and profound learning experiences by exploiting information spatialization and sequencing strategies.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126648642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigation and interaction in a real-scale digital mock-up using natural language and user gesture","authors":"M. Mirzaei, J. Chardonnet, F. Mérienne, A. Genty","doi":"10.1145/2617841.2620716","DOIUrl":"https://doi.org/10.1145/2617841.2620716","url":null,"abstract":"This paper presents a new real-scale 3D system and summarizes firsthand results concerning multi-modal navigation and interaction interfaces. This work is part of the CALLISTO-SARI collaborative project, which aims at constructing an immersive room and developing a set of software tools and navigation/interaction interfaces. Two sets of interfaces are introduced here: 1) interaction devices, 2) natural language (speech processing) and user gesture. An evaluation of this system using subjective observation (Simulator Sickness Questionnaire, SSQ) and objective measurements (Center of Gravity, COG) shows that natural-language and gesture-based interfaces induced less cybersickness compared with device-based interfaces. Therefore, gesture-based interfaces are more efficient than device-based ones.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114066891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive theater-sized dome design for edutainment and immersive training","authors":"Sophia Li, Yazhou Huang, V. Tri, Johan Elvek, Samuel Wan, Jan Kjallstrom, Nils Andersson, Mats Johansson, D. Lejerskar","doi":"10.1145/2617841.2620693","DOIUrl":"https://doi.org/10.1145/2617841.2620693","url":null,"abstract":"In this work we present a novel design for a theater-sized interactive fulldome system, the EONVision Idome. For edutainment, it combines Hollywood-style storytelling with cutting-edge technologies, providing audiences with an immersive 4D theater experience. For industrial training, the fully immersive and interactive dome delivers real-time VR tutorials for an enhanced training experience. Compared with traditional VR training facilities such as the CAVE and EON Icube, it hosts larger groups of trainees at a lower training cost per capita and provides the means to conduct collaborative training with multiple trainees.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126555362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stereoscopic augmented reality system for supervised training on minimal invasive surgery robots","authors":"Florin Octavian Matu, M. Thøgersen, Bo Galsgaard, Martin Jensen, M. Kraus","doi":"10.1145/2617841.2620722","DOIUrl":"https://doi.org/10.1145/2617841.2620722","url":null,"abstract":"Training in the use of robot-assisted surgery systems is necessary before a surgeon can perform procedures with them, because the setup is very different from manual procedures. In addition, surgery robots are highly expensive to both acquire and maintain --- thereby entailing the need for efficient training. When training with the robot, communication between the trainer and the trainee is limited, since the trainee often cannot see the trainer. To overcome this issue, this paper proposes an Augmented Reality (AR) system in which the trainer controls two virtual robotic arms. These arms are virtually superimposed on the trainee's video feed and can therefore be used to demonstrate and perform various tasks for the trainee. Furthermore, the trainer is presented with a 3D image through a stereoscopic display. The added depth perception enables the trainer to better guide and help the trainee. A prototype was developed using low-cost materials and evaluated by surgeons at Aalborg University Hospital. User feedback indicated that a 3D display for the trainer is very useful, as it enables the trainer to better monitor the procedure and thereby enhances the training experience. The virtual overlay was also found to be a good and illustrative approach for enhanced communication. However, the delay of the prototype made it difficult to use for actual training.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"347 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123401319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding large network datasets through embodied interaction in virtual reality","authors":"Alberto Betella, Enrique Martínez Bueno, Wipawee Kongsantad, R. Zucca, X. Arsiwalla, P. Omedas, P. Verschure","doi":"10.1145/2617841.2620711","DOIUrl":"https://doi.org/10.1145/2617841.2620711","url":null,"abstract":"The intricate web of information we generate nowadays is more massive than ever in the history of mankind. The sheer enormity of big data complicates the task of extracting semantic associations from complex networks. Stemming this \"data deluge\" calls for novel, unprecedented technologies. In this work, we engineered a system that enhances a user's understanding of large datasets through embodied navigation and natural gestures. This system constitutes an immersive virtual reality environment called the \"eXperience Induction Machine\" (XIM). One of the applications we tested using our system is the exploration of the human connectome: the network of nodes and connections that underlies the anatomical architecture of the human brain. As a comparative validation of our technology, we exposed participants to a connectome dataset using both our system and state-of-the-art software for visualization and analysis of the same network. We systematically measured participants' understanding and visual memory of the connectomic structure. Our results showed that participants retained more information about the structure of the network when using our system. Overall, our system constitutes a novel approach to the exploration and understanding of large complex networks.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122317612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual rope slider","authors":"Tatsuya Kodera, Naoto Tani, Jun Morita, Naoya Maeda, Kazuna Tsuboi, Motoko Kanegae, Y. Shinozuka, S. Shimamura, Kadoki Kubo, Yusuke Nakayama, Jaejun Lee, Maxime Pruneau, H. Saito, M. Sugimoto","doi":"10.1145/2617841.2620725","DOIUrl":"https://doi.org/10.1145/2617841.2620725","url":null,"abstract":"This paper proposes the \"Virtual Rope Slider\", which expands the rope-sliding experience by stimulating the senses of sight, hearing, wind, and vestibular sensation. A rope slide in the real world has physical restrictions in terms of scale and location, whereas our \"Virtual Rope Slider\" provides scale- and location-independent experiences in the virtual environment. The user is able to perceive a different sense of scale in the virtualized scenes through multi-modal stimulation with physical simulation.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134121997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3DCG art expression on a tablet device using integral photography","authors":"Nahomi Maki, Akihiko Shirai, K. Yanaka","doi":"10.1145/2617841.2620708","DOIUrl":"https://doi.org/10.1145/2617841.2620708","url":null,"abstract":"In conventional three-dimensional computer graphics (3DCG) technologies, a rendered image is two-dimensional. Although 3D models are used to construct a 3D scene, the rendered image includes no information except that seen from a single viewpoint. Even when rendering for binocular stereopsis is performed, there are only two viewpoints. This characteristic is a limitation of conventional 3DCG expression. In this study, we propose a new approach to 3DCG art expression. Our system uses integral photography (IP) consisting of a tablet device and a fly's eye lens. Stereoscopy is possible without special glasses, regardless of the device's orientation, because IP produces parallax not only horizontally but in all directions. A ready-made fly's eye lens can be combined with various tablet devices that have different screen resolutions because the extended fractional view method is used. The device is so small and lightweight that users can appreciate 3D art at any time and place. Notably, IP can reproduce glittering effects because each minute convex lens of the IP display emits light in different directions. We produced a 3DCG artwork, called \"Frozen Time,\" that fully employs the characteristics of our technology in motifs of \"floating ice,\" \"crystallized fossils,\" and an \"opal flower.\"","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125753153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"Which brew are you going to choose?\": an interactive 'tea-decider-er' in a teahouse shop window","authors":"Robyn Taylor, Tom Bartindale, Qasim Chaudhry, Philip Heslop, J. Bowers, Peter C. Wright, P. Olivier","doi":"10.1145/2617841.2620703","DOIUrl":"https://doi.org/10.1145/2617841.2620703","url":null,"abstract":"We describe the design of an interactive shop window created and installed for use in an independent teahouse. Using cameras to track the gestures of customers on the street front, the system allowed visitors to interact with an animatronic character who helped them choose a 'brew' from over 80 unusual tea varieties. In this paper we describe how we worked with the business owners, observing their practices to develop an understanding of how they helped customers choose one tea out of a large array of appealing possibilities. We describe the design process we undertook when creating the window, and examine the functional, aesthetic, technical and commercial factors that pose challenges when creating a bespoke piece of interactive art for a functioning real-world business.","PeriodicalId":128331,"journal":{"name":"Proceedings of the 2014 Virtual Reality International Conference","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115505266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}