{"title":"Taking it to the streets: how virtual reality can change mobile computing","authors":"Steven K. Feiner","doi":"10.1109/VR.2003.1191114","DOIUrl":"https://doi.org/10.1109/VR.2003.1191114","url":null,"abstract":"Virtual reality has long been an indoor affair. Whether constrained by stationary computers or displays, or by the limitations of our tracking technologies, researchers typically build virtual environments that work within a single physical room or a portion of a room. Even distributed virtual reality systems usually interconnect two or more such indoor spaces. Meanwhile, as computers grow ever smaller and faster, mobile computing is becoming an increasingly important part of our daily lives, accompanying us wherever we go, outdoors as well as indoors. What will it take for virtual reality to move outdoors and finally see the light of day? And why should we care? Within the virtual reality research community, work on augmented reality has already begun to explore outdoor environments: tracking using computer vision, gyroscopes, accelerometers, compasses, and GPS; and experimenting with (barely) wearable testbeds. I will discuss why virtual reality (especially in the form of augmented reality) and mobile computing are a synergistic combination, and will provide an overview of the research problems that must be addressed for mobile augmented reality systems to play a major role in our future. Among the issues that I will review are overcoming physical and aesthetic barriers to mobility and wearability; tracking and registration of heads, hands, bodies, and other objects; rendering virtual objects in the real world; and developing sufficiently high quality displays. Equally important is the design of head-tracked user interfaces that are well suited to mobility. Wearable systems will need to support collaboration among mobile users, facile interaction with real and virtual objects, and coordination across a wide range of heterogeneous displays and devices. Key here is the volatile nature of mobile interactions: users continually move into and out of the presence of other users, devices, and objects, and rapidly change tasks. Furthermore, augmented reality makes it possible for real and virtual objects to share the same display space, creating the potential for a variety of visually confusing relationships as objects overlap and occlude each other. Avoiding these problems will require that the virtual world be redesigned and laid out on the fly, to maintain desired visual relationships between virtual objects and other real and virtual objects.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116479809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"360 degree panoramic HMD immersion","authors":"K. Ghahremani, A. Rizzo, U. Neumann","doi":"10.1109/VR.2003.1191194","DOIUrl":"https://doi.org/10.1109/VR.2003.1191194","url":null,"abstract":"Panoramic video image acquisition is based on multiple overlapped sub-images. We will demonstrate high-resolution panoramic video by employing an array of five video cameras viewing the scene over a combined 360-degrees of horizontal arc and 50-degrees vertical. The five live video streams are digitized and processed in real time by a computer system. The camera lens distortions and colorimetric variations are corrected by the software application and a complete panoramic image is constructed in memory. Users can navigate the scene by wearing a head mounted display (HMD). A single window with a resolution of 800x600 is output to the HMD. A real-time (inertial-magnetic) orientation tracker is fixed to the HMD to sense the user’s head orientation. The orientation is reported to the viewing application through an IP socket, and the output display window is positioned (to mimic pan and tilt) within the full panoramic image in response to the user’s head orientation.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133059296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human movement performance in relation to path constraint - the law of steering in locomotion","authors":"Shumin Zhai, Rogier Woltjer","doi":"10.1109/VR.2003.1191133","DOIUrl":"https://doi.org/10.1109/VR.2003.1191133","url":null,"abstract":"We examine the law of steering - a quantitative model of human movement time in relation to path width and length previously established in hand drawing movement - in a VR locomotion paradigm. Participants drove a simulated vehicle in a virtual environment on paths whose shape and width were manipulated. Results showed that the law of steering also applies to locomotion. Participants' mean trial completion times linearly correlated (r^2 between 0.985 and 0.999) with an index of difficulty quantified as path distance to width ratio for the straight and circular paths used in this experiment. Their average mean and maximum speed was linearly proportional to path width. Such human performance regularity provides a quantitative tool for 3D human machine interface design and evaluation.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131737241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive techniques for real-time haptic and visual simulation of bone dissection","authors":"Marco Agus, Andrea Giachetti, E. Gobbetti, G. Zanetti, Antonio Zorcolo","doi":"10.1109/VR.2003.1191127","DOIUrl":"https://doi.org/10.1109/VR.2003.1191127","url":null,"abstract":"Bone dissection is an important component of many surgical procedures. In this paper we discuss adaptive techniques for providing real-time haptic and visual feedback during a virtual bone dissection simulation. The simulator is being developed as a component of a training system for temporal bone surgery. We harness the difference in complexity and frequency requirements of the visual and haptic simulations by modeling the system as a collection of loosely coupled concurrent components. The haptic component exploits a multi-resolution representation of the first two moments of the bone characteristic function to rapidly compute contact forces and determine bone erosion. The visual component uses a time-critical particle system evolution method to simulate secondary visual effects, such as bone debris accumulation, bleeding, irrigation, and suction.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123289013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MUVEES: a PC-based multi-user virtual environment for learning","authors":"Jim X. Chen, Yonggao Yang, Bowen Loftin, Geo. A. Mason","doi":"10.1109/VR.2003.1191135","DOIUrl":"https://doi.org/10.1109/VR.2003.1191135","url":null,"abstract":"This paper summarizes our NSF funded project, a PC-based multi-user learning environment: Multi-User Virtual Environment Experiential Simulator (MUVEES). The goal of this project is to create and evaluate graphical multi-user virtual environments that use digitized museum resources to enhance middle school students' motivation and learning about science. Here, we discuss the design, implementation, and applications of MUVEES. We present its structure, efficient approaches that achieve more realistic avatar behaviors, and pedagogical strategies that foster strong learning outcomes across a wide range of individual student characteristics. Our preliminary results indicate that MUVEES is a powerful vehicle for collaboration and learning. We believe that our system and implementation methods will help improve future multi-user virtual environments.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128344515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The relationship between presence and performance in virtual environments: results of a VERTS study","authors":"C. Youngblut, Odette Huie","doi":"10.1109/VR.2003.1191158","DOIUrl":"https://doi.org/10.1109/VR.2003.1191158","url":null,"abstract":"Understanding the conditions under which virtual environment (VE) users experience a sense of presence may, in the long term, yield valuable insights into human cognition and psychology. More immediately, however, the general assumption is that a sense of presence impacts a user's ability to perform a task and, therefore, that insights into presence offer a potential payoff in terms of task performance. This paper describes a recent study that investigated the possible relationship between presence and task performance in VEs.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128715249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparative study of user performance in a map-based virtual environment","authors":"J. Swan, Joseph L. Gabbard, D. Hix, R. Schulman, Keun Pyo Kim","doi":"10.1109/VR.2003.1191149","DOIUrl":"https://doi.org/10.1109/VR.2003.1191149","url":null,"abstract":"We present a comparative study of user performance with tasks involving navigation, visual search, and geometric manipulation, in a map-based battlefield visualization virtual environment (VE). Specifically, our experiment compared user performance of the same task across four different VE platforms: desktop, cave, workbench, and wall. Independent variables were platform type, stereopsis (stereo, mono), movement control mode (rate, position), and frame of reference (egocentric, exocentric). Overall results showed that users performed tasks fastest using the desktop and slowest using the workbench. Other results are detailed in the article. Notable is that we designed our task in an application context, with tasking much closer to how users would actually use a real-world battlefield visualization system. This is very uncommon for comparative studies, which are usually designed with abstract tasks to minimize variance. This is, we believe, one of the first and most complex studies to comparatively examine, in an application context, this many key variables affecting VE user interface design.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122190600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Device independence and extensibility in gesture recognition","authors":"Jacob Eisenstein, Shahram Ghandeharizadeh, L. Golubchik, C. Shahabi, Donghui Yan, Roger Zimmermann","doi":"10.1109/VR.2003.1191141","DOIUrl":"https://doi.org/10.1109/VR.2003.1191141","url":null,"abstract":"Gesture recognition techniques often suffer from being highly device-dependent and hard to extend. If a system is trained using data from a specific glove input device, that system is typically unusable with any other input device. The set of gestures that a system is trained to recognize is typically not extensible without retraining the entire system. We propose a novel gesture recognition framework to address these problems. This framework is based on a multi-layered view of gesture recognition. Only the lowest layer is device-dependent: it converts raw sensor values produced by the glove to a glove-independent semantic description of the hand. The higher layers of our framework can be reused across gloves, and are easily extensible to include new gestures. We have experimentally evaluated our framework and found that it yields comparable performance to conventional techniques, while substantiating our claims of device independence and extensibility.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116993976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of handling real objects and avatar fidelity on cognitive task performance in virtual environments","authors":"Benjamin C. Lok, Samir Naik, M. Whitton, F. Brooks","doi":"10.1109/VR.2003.1191130","DOIUrl":"https://doi.org/10.1109/VR.2003.1191130","url":null,"abstract":"Immersive virtual environments (VEs) provide participants with computer-generated environments filled with virtual objects to assist in learning, training, and practicing dangerous and/or expensive tasks. But for certain tasks, does having every object being virtual inhibit the interactivity? Further, does the virtual object's visual fidelity affect performance? Overall VE effectiveness may be reduced if users spend most of their time and cognitive capacity learning how to interact and adapting to interacting with a purely virtual environment. We investigated how handling real objects and how self-avatar visual fidelity affects performance on a spatial cognitive task in an immersive VE. We compared participants' performance on a block arrangement task in both a real-space environment and several virtual and hybrid environments. The results showed that manipulating real objects in a VE brings task performance closer to that of real space, compared to manipulating virtual objects.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116920870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hollywood meets simulation: creating immersive training environments at the ICT","authors":"J. Gratch, P. Debevec, Dick Lindheim, Frédéric H. Pighin, J. Rickel, W. Swartout, D. Traum, Jackie Morie","doi":"10.1109/VR.2003.1191180","DOIUrl":"https://doi.org/10.1109/VR.2003.1191180","url":null,"abstract":"The Institute for Creative Technologies is a federally funded research center set up three years ago at the University of Southern California to advance the state of the art in immersive training. Teaming researchers in artificial intelligence, graphics, animation and immersive audio with Hollywood writers, directors and special effect artists, the ICT brings a unique mix of high-technology and professional storytelling esthetic to the problem of creating compelling immersive environments. This afternoon tutorial will consist of a panel presentation by top ICT affiliated researchers to discuss this wide range of technologies and skills and how they relate to the design of virtual environments. The panel will be followed by a tour of the ICT facilities and demonstrations of several virtual training systems.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133200189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}