{"title":"Assessing the Effects of Orientation and Device on 3D Positioning","authors":"Robert J. Teather, W. Stuerzlinger","doi":"10.1109/VR.2008.4480807","DOIUrl":"https://doi.org/10.1109/VR.2008.4480807","url":null,"abstract":"We present two studies to assess which physical factors of various input devices influence 3D object movement tasks. In particular, we evaluate the factors that seem to make the mouse a good input device for constrained 3D movement tasks. The first study examines the effect of a supporting surface across orientation of input device movement and display orientation. Surprisingly, no significant results were found for the effect of physical support for constrained movement techniques. Also, no significant difference was found between matching the orientation of the display to that of the input device movement. A second study found that the mouse outperformed all tracker conditions for speed, but the presence or absence of support had no significant effect when tracker movement is constrained to 2D.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125006623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MIRELA: A Language for Modeling and Analyzing Mixed Reality Applications Using Timed Automata","authors":"Jean-Yves Didier, Bachir Djafri, Hanna Klaudel","doi":"10.1109/VR.2008.4480785","DOIUrl":"https://doi.org/10.1109/VR.2008.4480785","url":null,"abstract":"We propose a compositional modeling framework for mixed reality (MR) software architectures in order to express, simulate and validate formally the time depending properties of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The obtained elements as well as their typical interactions give rise to generic representations in terms of timed automata. A whole application is then obtained as a composition of such defined components. To ease writing specifications, we propose a textual language (named MIRELA: mixed reality language) along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints, and which also may be used to generate source code skeletons for an implementation on a MR platform.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130431760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HOG on a WIM","authors":"A. Stafford, W. Piekarski, B. Thomas","doi":"10.1109/VR.2008.4480805","DOIUrl":"https://doi.org/10.1109/VR.2008.4480805","url":null,"abstract":"This paper presents a new interaction metaphor for mixed space collaboration: HOG on a WIM. Hand of god (HOG) on a world in miniature (WIM) is the first collaborative WIM. It enables a table-top display user to collaborate with a virtual reality (VR) user. The tabletop display user has a god's eye view of the virtual world and communicates with the VR user through natural gestures and speech. The VR user controls a WIM to navigate and manipulate the virtual world with the aid of the tabletop display user's guidance.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"89 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120852939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An initial study into augmented inward looking exploration and navigation in CAVE-like IPT systems","authors":"R. Aspin","doi":"10.1109/VR.2008.4480783","DOIUrl":"https://doi.org/10.1109/VR.2008.4480783","url":null,"abstract":"This research presents and evaluates a CAVE-like IPT system in which a tracked display and interaction device (TDID), based on a tablet PC, is used to display an augmented viewing system for detailed examination of a close focused objects. This maintains wider context, displayed on the IPT display environment, thereby enabling effective wayfinding and navigation, while still enabling detailed examination of the region of interest on a position sensitive high-resolution display (TDID). Evaluation against an un- augmented CAVE-like IPT system, demonstrates the effectiveness of this approach in enabling users to both explore around the model effectively resolve fine detail.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124375366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symmetric Model of Remote Collaborative MR Using Tangible Replicas","authors":"Shun Yamamoto, Hidekazu Tamaki, Yuta Okajima, Ken-ichi Okada, Yuichi Bannai","doi":"10.1109/VR.2008.4480753","DOIUrl":"https://doi.org/10.1109/VR.2008.4480753","url":null,"abstract":"Research into collaborative mixed reality (MR) or augmented reality has recently been active. Previous studies showed that MR was preferred for collocated collaboration while immersive virtual reality was preferred for remote collaboration. The main reason for this preference is that the physical object in remote space cannot be handled directly. However, MR using tangible objects is still attractive for remote collaborative systems, because MR enables seamless interaction with real objects enhanced by virtual information with the sense of touch. Here we introduce \"tangible replicas\"(dual objects that have the same shape, size, and surface), and propose a symmetrical model for remote collaborative MR. The result of experiments shows that pointing and drawing functions on the tangible replica work well despite limited shared information.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123792859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effects of Virtual Reality, Augmented Reality, and Motion Parallax on Egocentric Depth Perception","authors":"J. Adam Jones, J. Edward Swan, Gurjot Singh, E. Kolstad, Stephen R. Ellis","doi":"10.1145/1394281.1394283","DOIUrl":"https://doi.org/10.1145/1394281.1394283","url":null,"abstract":"A large number of previous studies have shown that egocentric depth perception tends to be underestimated in virtual reality (VR) - objects appear smaller and farther away than they should. Various theories as to why this might occur have been investigated, but to date the cause is not fully understood. A much smaller number of studies have investigated how depth perception operates in augmented reality (AR), and some of these studies have also indicated a similar underestimation effect. In this paper we report an experiment that further investigates these effects. The experiment compared VR and AR conditions to two real-world control conditions, and studied the effect of motion parallax across all conditions. Our combined VR and AR head-mounted display (HMD) allowed us to develop very careful calibration procedures based on real-world calibration widgets, which cannot be replicated with VR-only HMDs. To our knowledge, this is the first study to directly compare VR and AR conditions as part of the same experiment.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114338814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conversational Pointing Gestures for Virtual Reality Interaction: Implications from an Empirical Study","authors":"Thies Pfeiffer, Marc Erich Latoschik, I. Wachsmuth","doi":"10.1109/VR.2008.4480801","DOIUrl":"https://doi.org/10.1109/VR.2008.4480801","url":null,"abstract":"Interaction in conversational interfaces strongly relies on the system's capability to interpret the user's references to objects via deictic expressions. Deictic gestures, especially pointing gestures, provide a powerful way of referring to objects and places, e.g., when communicating with an embodied conversational agent in a virtual reality environment. We highlight results drawn from a study on pointing and draw conclusions for the implementation of pointing-based conversational interactions in partly immersive virtual reality.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114367068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cutting, Deforming and Painting of 3D meshes in a Two Handed Viso-haptic VR System","authors":"A. Faeth, Michael A. Oren, Jonathan Sheller, Sean Godinez, C. Harding","doi":"10.1109/VR.2008.4480776","DOIUrl":"https://doi.org/10.1109/VR.2008.4480776","url":null,"abstract":"We describe M4, the multi-modal mesh manipulation system, which aims to provide a more intuitive desktop interface for freeform manipulation of 3D meshes. The system combines interactive 3D graphics with haptic force feedback and provide several virtual tools for the manipulation of 3D objects represented by irregular triangle meshes. The current functionality includes mesh painting with pressure dependent brush size and paint preview, mesh cutting via drawing a poly-line on the model and two types of mesh deformations. We use two phantoms, either in a co-located haptic/3D-stereo setup or as a fish tank VR setup with a 3D flat panel. In our system, the second hand assists the manipulation of the object, either by \";holding\"; the mesh or by affecting the manipulation directly. While the connection of 3D artists and designers to such a direct interaction system may be obvious, we are also investigating its potential benefits for landscape architects and other users of spatial geoscience data. Feedback from an upcoming user study will evaluate the benefits of this system and its tools for these different user groups.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121421221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GVT: a platform to create virtual environments for procedural training","authors":"S. Gerbaud, N. Mollet, Franck Ganier, B. Arnaldi, J. Tisseau","doi":"10.1109/VR.2008.4480778","DOIUrl":"https://doi.org/10.1109/VR.2008.4480778","url":null,"abstract":"The use of virtual environments for training is strongly stimulated by important needs for training on sensitive equipments. Yet, developing such an application is often done without reusing existing components, which requires a huge amount of time. We present in this paper a full authoring platform to facilitate the development of both new virtual environments and pedagogical information for procedural training. This platform, named GVT (generic virtual training) relies on innovative models and provides authoring tools which allow capitalizing on the developments realized. We present a generic model named STORM, used to describe reusable behaviors for 3D objects and reusable interactions between those objects. We also present a scenario language named LORA which allows non computer scientists to author various and complex sequences of tasks in a virtual scene. Based on those models, as an industrial validation with Nexter-Group, more than fifty operational scenarios of maintenance training on military equipments have been realized so far. We have also set up an assessment campaign, and we expose in this paper the first results which show that GVT enables trainees to learn procedures efficiently. The platform keeps on evolving and training on collaborative procedures will soon be available.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128630989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile Group Dynamics in Large-Scale Collaborative Virtual Environments","authors":"Trevor J. Dodds, R. Ruddle","doi":"10.1109/VR.2008.4480751","DOIUrl":"https://doi.org/10.1109/VR.2008.4480751","url":null,"abstract":"We have developed techniques called mobile group dynamics (MGDs), which help groups of people to work together while they travel around large-scale virtual environments. MGDs explicitly showed the groups that people had formed themselves into, and helped people move around together and communicate over extended distances. The techniques were evaluated in the context of an urban planning application, by providing one batch of participants with MGDs and another with an interface based on conventional collaborative virtual environments (CVEs). Participants with MGDs spent nearly twice as much time in close proximity (within 10m of their nearest neighbor), communicated seven times more than participants with a conventional interface, and exhibited real-world patterns of behavior such as staying together over an extended period of time and regrouping after periods of separation. The study has implications for CVE designers, because it shows how MGDs improves groupwork in CVEs.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129740927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}