{"title":"Title of tutorial: Designing immersive VR systems: From bits to bolts","authors":"Luciano P. Soares, J. Jorge, J. M. Dias, A. Raposo, B. Araújo, Leonel Valbom, Filipe Gaspar","doi":"10.1109/VR.2012.6180947","DOIUrl":"https://doi.org/10.1109/VR.2012.6180947","url":null,"abstract":"Immersive 3D Virtual Environments (VE) have become affordable for many research centers. However, a complete solution needs several integration steps to be fully operational. Some of these steps are difficult to accomplish and require an uncommon combination of different skills. This tutorial presents the most recent techniques developed to address this problem, from displays to software tools. The hardware in a typical VR installations combines projectors, screens, speakers, computers, tracking and I/O devices. The tutorial will discuss hardware options, explaining their advantages and disadvantages. We will cover design decisions from basic software and hardware design, through user tracking, multimodal human-computer interfaces and acoustic rendering, to how to administrate the whole solution. Additionally, we will provide an introduction to existing tracking technologies, explaining how the most common devices work, while focusing on infrared optical tracking. Finally, we briefly cover integration software and middleware developed for most VE settings.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128888423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VR 2012 tutorial: Quantitative and qualitative methods for human-subject experiments in Virtual and Augmented Reality","authors":"Joseph L. Gabbard, J. Swan, S. Ellis","doi":"10.1109/VR.2012.6180945","DOIUrl":"https://doi.org/10.1109/VR.2012.6180945","url":null,"abstract":"This tutorial is for researchers and engineers, working in the field of Virtual Reality (VR) and Augmented Reality (AR), who wish to conduct user-based experiments with a specific aim of promoting both traditional quantitative human-subject experiments and qualitative methods for assessing usability. This tutorial is for a full-day tutorial presenting both quantitative and qualitative approaches to conducting human-subject experiments. It covers (1) the basic principles of experimental design and analysis, with an emphasis on human-subject experiments in VR/AR; (2) qualitative studies (e.g., formative evaluation methods) for assessing and improving VR/AR user interfaces and user interaction along with lessons learned from conducting many user-based studies; and (3) a “journalistic approach” to measuring human performance that organizes the activity around questions such as “Who? What? When? Where? How? and Why?”.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128366980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Floating integral photography using Fresnel mirror","authors":"K. Yanaka, M. Yoda, T. Iizuka","doi":"10.1109/VR.2012.6180918","DOIUrl":"https://doi.org/10.1109/VR.2012.6180918","url":null,"abstract":"We previously developed an integral photography (IP) system in which a 3D image with both horizontal and vertical parallax looks as if it is floating in the air, and we have now expanded it so that animated images can be displayed. No special glasses are necessary with this system, which consists of a 3D display subsystem and a Fresnel mirror. The light emitted from the 3D display subsystem is reflected by the Fresnel mirror, and the image of a floating object is then formed in space.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"115 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134393635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using virtual reality technology in linguistic research","authors":"Thies Pfeiffer","doi":"10.1109/VR.2012.6180893","DOIUrl":"https://doi.org/10.1109/VR.2012.6180893","url":null,"abstract":"In this paper, we argue that empirical research on genuine linguistic topics, such as on the production of multimodal utterances in the speaker and the interpretation of the multimodal signals in the interlocutor, can greatly benefit from the use of virtual reality technologies. Established methodologies for research on multimodal interactions, like the presentation of pre-recorded 2D videos of interaction partners as stimuli and the recording of interaction partners using multiple 2D video cameras have crucial shortcomings regarding ecological validity and the precision of measurements that can be achieved. In addition, these methodologies enforce restrictions on the researcher. The stimuli, for example, are not very interactive and thus not as close to natural interactions as ultimately desired. Also, the analysis of 2D video recordings requires intensive manual annotations, often frame-by-frame, which negatively affects the feasible number of interactions which can be included in a study. The technologies bundled under the term virtual reality offer exciting possibilities for the linguistic researcher: gestures can be tracked without being restricted to fixed perspectives, annotation can be done on large corpora (semi-)automatically and virtual characters can be used to produce specific linguistic stimuli in a repetitive but interactive fashion. Moreover, immersive 3D visualizations can be used to recreate a simulation of the recorded interactions by fusing the raw data with theoretic models to support an iterative data-driven development of linguistic theories. This paper discusses the potential of virtual reality technologies for linguistic research and provides examples for the application of the methodology.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122915487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NuNav3D: A touch-less, body-driven interface for 3D navigation","authors":"C. Papadopoulos, D. Sugarman, A. Kaufman","doi":"10.1109/VR.2012.6180885","DOIUrl":"https://doi.org/10.1109/VR.2012.6180885","url":null,"abstract":"We introduce NuNav3D, a body-driven 3D navigation interface for large displays and immersive scenarios. While 3D navigation is a core component of VR applications, certain situations, like remote displays in public or large visualization environments, do not allow for using a navigation controller or prop. NuNav3D maps hand motions, obtained from a pose recognition framework which is driven by a depth sensor, to a virtual camera manipulator, allowing for direct control of 4 DOFs of navigation. We present the NuNav3D navigation scheme and our preliminary user study results under two scenarios, a path-following case with tight geometrical constraints and an open space exploration case, while comparing our method against a traditional joypad controller.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128105419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining the equivalence of simulated and real AR on a visual following and identification task","authors":"Cha Lee, Steffen Gauglitz, Tobias Höllerer, D. Bowman","doi":"10.1109/VR.2012.6180890","DOIUrl":"https://doi.org/10.1109/VR.2012.6180890","url":null,"abstract":"Mixed Reality (MR) simulation, in which a Virtual Reality (VR) system is used to simulate both the real and virtual components of an Augmented Reality (AR) system, has been proposed as a method for evaluating AR systems with greater levels of experimental control. However, factors such as the latency of the MR simulator may impact the validity of experimental results obtained with MR simulation. We present a study evaluating the effects of simulator latency on the equivalence of results from an MR simulator and a real AR system. We designed an AR experiment which required the participants to visually follow a virtual pipe around a small room filled with real targets and to find and identify the targets which were intersected by the pipe. We show that, with a 95% confidence interval, the results from all three simulated AR conditions fall well within one standard deviation of the real AR case.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"173 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121726572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interval training with Astrojumper","authors":"A. Nickel, Hugh Kinsey, H. Haack, Mykel Pendergrass, T. Barnes","doi":"10.1109/VR.2012.6180931","DOIUrl":"https://doi.org/10.1109/VR.2012.6180931","url":null,"abstract":"The prevalence of obesity among adolescents and adults in the U.S. is a matter of concern. Exercise video games reach a wide audience and can be used to motivate increased physical activity. We have previously developed Astrojumper, an exergame exploring game mechanics that provide a fun experience and effective exercise, and have now developed a new version of Astrojumper that supports interval training through additional mechanics. We believe the new version will improve upon the first in player motivation, enjoyment and replayability, and also in the level of physical challenge the game affords its players.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124982506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a linguistically motivated model for selection in virtual reality","authors":"Thies Pfeiffer","doi":"10.1109/VR.2012.6180896","DOIUrl":"https://doi.org/10.1109/VR.2012.6180896","url":null,"abstract":"Swiftness and robustness of natural communication is tied to the redundancy and complementarity found in our multimodal communication. Swiftness and robustness of human-computer interaction (HCI) is also a key to the success of a virtual reality (VR) environment. The interpretation of multimodal interaction signals has therefore been considered a high goal in VR research, e.g. following the visions of Bolt's put-that-there in 1980 [1]. It is our impression that research on user interfaces for VR systems has been focused primarily on finding and evaluating technical solutions and thus followed a technology-oriented approach to HCI. In this article, we argue to complement this by a human-oriented approach based on the observation of human-human interaction. The aim is to find models of human-human interaction that can be used to create user interfaces that feel natural. As the field of Linguistics is dedicated to the observation and modeling of human-human communication, it could be worthwhile to approach natural user interfaces from a linguistic perspective. We expect at least two benefits from following this approach. First, the human-oriented approach substantiates our understanding of natural human interactions. Second, it brings about a new perspective by taking the interaction capabilities of a human addressee into account, which are not often explicitly considered or compared with that of the system. As a consequence of following both approaches to create user interfaces, we expect more general models of human interaction to emerge.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130549556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Attention Volumes for usability studies in virtual reality","authors":"Thies Pfeiffer","doi":"10.1109/VR.2012.6180910","DOIUrl":"https://doi.org/10.1109/VR.2012.6180910","url":null,"abstract":"The time course and the distribution of visual attention are powerful measures for the evaluation of the usability of products. Eye tracking is thus an established method for evaluating websites, software ergonomy or modern cockpits for cars or airplanes. In most cases, however, the point of regard is measured on 2D products. This article presents work that uses an approach to measure the point of regard in 3D to generate 3D Attention Volumes as a qualitative 3D visualization of the distribution of visual attention. This visualization can be used to evaluate the design of virtual products in an immersive 3D setting, similar as heatmaps are used to assess the design of websites.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129952252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmenting moving planar surfaces interactively with video projection and a color camera","authors":"S. Audet, M. Okutomi, Masayuki Tanaka","doi":"10.1109/VR.2012.6180907","DOIUrl":"https://doi.org/10.1109/VR.2012.6180907","url":null,"abstract":"Traditional applications of augmented reality superimpose generated images onto the real world through goggles or monitors held between objects of interest and the user. To render the augmented surfaces interactive, we may exploit directly existing computer vision techniques. However, when using video projection to alter directly the appearance of surfaces, most vision-based algorithms fail. Even Wear Ur World [5], a recent and otherwise well-received interactive projector-camera system, relies on colored thimbles as markers. As notable exception, Tele-Graffiti [6] was designed for normal visible-light cameras without markers, but still considers the light emitted from the projector as unwanted interference, limiting its application.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129786639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}