{"title":"A user study to understand motion visualization in virtual reality","authors":"D. Coffey, Fedor Korsakov, M. Ewert, Haleh Hagh-Shenas, Lauren Thorson, Daniel F. Keefe","doi":"10.1109/VR.2012.6180883","DOIUrl":"https://doi.org/10.1109/VR.2012.6180883","url":null,"abstract":"Studies of motion are fundamental to science. For centuries, pictures of motion have factored importantly in making scientific discoveries possible. Today, there is perhaps no tool more powerful than interactive virtual reality (VR) for conveying complex space-time data to scientists, doctors, and others; however, relatively little is known about how to design virtual environments in order to best facilitate these analyses. In designing virtual environments for presenting scientific motion data (e.g., 4D data captured via medical imaging or motion tracking) our intuition is most often to “reanimate” these data in VR, displaying moving virtual bones and other 3D structures in virtual space as if the viewer were watching the data being collected in a biomechanics lab. However, recent research in other contexts suggests that although animated displays are effective for presenting known trends, static displays are more effective for data analysis.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"10 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129071568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"mirracle: Augmented Reality in-situ visualization of human anatomy using a magic mirror","authors":"T. Blum, Valerie Kleeberger, Christoph Bichlmeier, Nassir Navab","doi":"10.1109/VR.2012.6180934","DOIUrl":"https://doi.org/10.1109/VR.2012.6180934","url":null,"abstract":"The mirracle system extends the concept of an Augmented Reality (AR) magic mirror to the visualization of human anatomy on the body of the user. Using a medical volume renderer, a CT dataset is augmented onto the user. Through a slice-based user interface, slices from the CT and from an additional photographic dataset can be selected.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129565397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed reality game prototypes for upper body exercise and rehabilitation","authors":"M. Gotsis, Amanda Tasse, Maximilian Swider, V. Lympouridis, Irina C. Poulos, A. Thin, David Turpin, Diane Tucker, M. Jordan-Marsh","doi":"10.1109/VR.2012.6180940","DOIUrl":"https://doi.org/10.1109/VR.2012.6180940","url":null,"abstract":"This research demonstration consists of an integrated hardware and software platform developed for rapid prototyping of virtual reality-based games for upper body exercise and rehabilitation. The exercise protocol has been adopted from an evidence-based shoulder exercise program for individuals with spinal cord injury. The hardware consists of a custom metal rig that holds a standard wheelchair, six Gametraks attached to elastic exercise bands, a Microsoft Kinect, a laptop and a large screen. A total of 21 prototypes were built using drivers for Kinect, MaxMSP and Unity Pro 3 in order to evaluate game ideas based on deconstruction of the exercise protocol. Future directions include validation of our heuristic design and evaluation model and the development of an exercise suite of point-of-care VR games.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125981287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual reality in the wild: A self-contained and wearable simulation system","authors":"Eric Hodgson, E. Bachmann, David Waller, Andrew Bair, Andrew Oberlin","doi":"10.1109/VR.2012.6180929","DOIUrl":"https://doi.org/10.1109/VR.2012.6180929","url":null,"abstract":"We implement and describe a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites, making it available in any space, such as a high-school gymnasium or a public park. Our hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location with no external infrastructure and little-to-no setup required. We demonstrate the ability of this system to provide realistically motion-tracked navigation for users and to generate usable behavioral data by having participants navigate through a full-scale virtual grocery store while physically situated in a grassy field. Applications for behavioral research and use cases for other fields are discussed.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121717994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3DTown: The automatic urban awareness project","authors":"Eduardo R. Corral-Soto, R. Tal, Larry Wang, R. Persad, Luo Chao, C. Solomon, Bob Hou, G. Sohn, J. Elder","doi":"10.1109/VR.2012.6180895","DOIUrl":"https://doi.org/10.1109/VR.2012.6180895","url":null,"abstract":"In this work, the goal is to develop a distributed system for sensing, interpreting, and visualizing the real-time dynamics of urban life within the 3D context of a city, focusing on typical, useful dynamic information such as walking pedestrians and moving vehicles captured by pan-tilt-zoom (PTZ) video cameras. Three-dimensionalization of the data extracted from video cameras is achieved by an algorithm that uses the Manhattan structure of the urban scene to automatically estimate the camera pose. Thus, if the pose of the video camera changes, our system will automatically update the corresponding projection matrix to maintain accurate geo-location of the scene dynamics.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133196705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of IR-based virtual reality tracking using multiple Kinects","authors":"S. Satyavolu, G. Bruder, P. Willemsen, Frank Steinicke","doi":"10.1109/VR.2012.6180925","DOIUrl":"https://doi.org/10.1109/VR.2012.6180925","url":null,"abstract":"This article presents an analysis of using multiple Microsoft Kinects to track users in a VR system. More specifically, we analyse the capability of Kinects to track infrared points for use in VR applications. Multiple Kinect sensors may serve as an affordable means to track position information across a large lab space in applications where precise location tracking is not necessary. We present our findings and analysis of the tracking range of a Kinect sensor in situations in which multiple Kinects are present. Overall, the Kinect sensor works well for this application, and in lieu of more expensive options, Kinect sensors may be a viable choice for very low-cost tracking in VR applications.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124498233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical camouflage III: Auto-stereoscopic and multiple-view display system using retro-reflective projection technology","authors":"Yuji Uema, Naoya Koizumi, Shian Wei Chang, K. Minamizawa, M. Sugimoto, M. Inami","doi":"10.1109/VR.2012.6180880","DOIUrl":"https://doi.org/10.1109/VR.2012.6180880","url":null,"abstract":"This paper presents a new type of optical camouflage system based on the retro-reflective projection technology. Retro-reflective projection is a method used to create augmented reality that combines the virtual world with the real world. The conventional model of an optical camouflage system consists of a retro-reflective screen, a projection source and a beam splitter. In such a setup, the user needs to observe an object covered with the retro-reflective screen through a single viewpoint. This is called a monocular system. In our new setup, our aim is to construct a system that has multiple viewpoints by applying a novel projection array system. We will describe the method with which this projection array system is achieved using one projection source, the configuration of the system, and the trade-offs of the system. In addition, we will describe an application of our system in a car. The installed system makes the backseat virtually transparent, allowing the driver to see the blind spots at the rear when reversing the car.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"6 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120815151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Interaction assistance through context-awareness","authors":"Yannick Dennemont, Guillaume Bouyer, S. Otmane, M. Mallem","doi":"10.1109/VR.2012.6180903","DOIUrl":"https://doi.org/10.1109/VR.2012.6180903","url":null,"abstract":"This work focuses on enabling 3D interaction assistance by adding adaptivity depending on the tasks, objectives, and the general interaction context. We model the context using Conceptual Graphs (CG) based on an ontology. Including CG in our scene manager (Virtools) allows us to add semantic information and to describe the available tools. We handle rules leading to adaptation with a logic programming layer (Prolog+CG) included in the Amine platform. This project is a step towards Intelligent Virtual Environments, proposing a hybrid solution that adds a separate semantic reasoning layer to classic environments. The first case study automatically manages a few modalities depending on the distance to objects, user movement, available tools, and modality risks.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125200510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}