{"title":"Virtual Experiences for Social Perspective-Taking","authors":"A. Raij, Aaron Kotranza, D. Lind, Benjamin C. Lok","doi":"10.1109/VR.2009.4811005","DOIUrl":"https://doi.org/10.1109/VR.2009.4811005","url":null,"abstract":"This paper proposes virtual social perspective-taking (VSP). In VSP, users are immersed in an experience of another person to aid in understanding the person's perspective. Users are immersed by 1) providing input to user senses from logs of the target person's senses, 2) instructing users to act and interact like the target, and 3) reminding users that they are playing the role of the target. These guidelines are applied to a scenario where taking the perspective of others is crucial - the medical interview. A pilot study (n = 16) using this scenario indicates VSP elicits reflection on the perspectives of others and changes behavior in future, similar social interactions. By encouraging reflection and change, VSP advances the state-of-the-art in training social interactions with virtual experiences.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133997114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A VR Multimodal Interface for Small Artifacts in the Gold Museum","authors":"P. Figueroa, J. Borda, Diego Restrepo, P. Boulanger, Eduardo Londoño, F. Prieto","doi":"10.1109/VR.2009.4811061","DOIUrl":"https://doi.org/10.1109/VR.2009.4811061","url":null,"abstract":"The Gold Museum, in Bogotá, Colombia, displays the largest collection of pre-Hispanic gold artifacts in the world and it has been renovated recently. With funds from the Colombian Government, we have created a multimodal experience that allows visitors to touch, hear, and see small artifacts. Here we present a description of this demo, its functionality, and technical requirements.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"250 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116003600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crossover Applications","authors":"Brian Wilke, Jonathan Metzgar, Keith Johnson, S. Semwal, B. Snyder, KaChun Yu, D. Neafus","doi":"10.1109/VR.2009.4811068","DOIUrl":"https://doi.org/10.1109/VR.2009.4811068","url":null,"abstract":"VR applications provide an opportunity to study a variety of new applications. One of the focus areas of the media convergence, games and media integration (McGMI) program is to develop new media applications for the visually impaired population. We are particularly interested in developing applications which are at the same time interesting for the sighted population as well¿hence the title ¿ crossover applications. Bonnie Snyder, who has been working with the visually impaired population for more than twenty years, visited a group of students early in the Fall 2008 As many typical applications are geared toward sighted population, the cost of software and hardware systems tend to be a lot higher. In addition, several games, developed for primarily the sighted, provide minimal interaction for the blind. Although this issue remains a topic of discussion in both IEEE VR and ISMAR and related conferences, much more can be done. We used this as motivation and developed three applications for both the sighted and the visually impaired population (a) Hatpic chess program combines PHANToM force feedback interaction with OpenAL audio; (b) Simple hand movement recognition on iPhone provides a hierarchical menu application; (c) Barnyard fun program uses interesting animal-sound feedback to facilitate spatial selection. In future, we expect to conduct testing of these applications in Denver Museum as possible.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116302967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Welder Trainer","authors":"Steven A. White, Mores Prachyabrued, Dhruva Baghi, Amit Aglawe, D. Reiners, C. Borst, Terry Chambers","doi":"10.1109/VR.2009.4811066","DOIUrl":"https://doi.org/10.1109/VR.2009.4811066","url":null,"abstract":"The goal of this project is to develop a training system that can simulate the welding process in real-time and give feedback that avoids learning wrong motion patterns for beginning welders and can be used to analyze the process by the teacher afterwards. The system is based mainly on COTS components. A standard PC with a Dual-core CPU and a medium-end nVidia graphics card is sufficient. Input is done with a regular welding gun to allow realistic training. The gun is tracked by an OptiTrack system with 3 FLEX:V100 cameras. The same is also used to track a regular welding helmet to get accurate eye positions for display, which was chosen over glasses for robustness. The display itself is a Zalman Trimon stereo monitor that is laid out horizontally. The software is designed around a main simulation component for solving heat conduction on a grid of simulation points based on local GaussSeidel elimination.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129958495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"False Image Projector For Head Mounted Display Using Retrotransmissive Optical System","authors":"R. Kijima, J. Watanabe","doi":"10.1109/VR.2009.4811063","DOIUrl":"https://doi.org/10.1109/VR.2009.4811063","url":null,"abstract":"So called \"false image projector\" with a novel notion \"retrotransmission\" is proposed and early prototype will be shown in the demo. This article explains the other research activities of authors' lab as well.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129409165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crafting Personalized Facial Avatars Using Editable Portrait and Photograph Example","authors":"Tanasai Sucontphunt, Z. Deng, U. Neumann","doi":"10.1109/VR.2009.4811044","DOIUrl":"https://doi.org/10.1109/VR.2009.4811044","url":null,"abstract":"Computer-generated facial avatars have been increasingly used in a variety of virtual reality applications. Emulating the real-world face sculpting process, we present an interactive system to intuitively craft personalized 3D facial avatars by using 3D portrait editing and image example-based painting techniques. Starting from a default 3D face portrait, users can conveniently perform intuitive \"pulling\" operations on its 3D surface to sculpt the 3D face shape towards any individual. To automatically maintain the faceness of the 3D face being crafted, novel facial anthropometry constraints and a reduced face description space are incorporated into the crafting algorithms dynamically. Once the 3D face geometry is crafted, this system can automatically generate a face texture for the crafted model using an image example-based painting algorithm. Our user studies showed that with this system, users are able to craft a personalized 3D facial avatar efficiently on average within one minute.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126771624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurement Protocols for Medium-Field Distance Perception in Large-Screen Immersive Displays","authors":"Eric Klein, J. Swan, G. Schmidt, M. Livingston, O. Staadt","doi":"10.1109/VR.2009.4811007","DOIUrl":"https://doi.org/10.1109/VR.2009.4811007","url":null,"abstract":"How do users of virtual environments perceive virtual space? Many experiments have explored this question, but most of these have used head-mounted immersive displays. This paper reports an experiment that studied large-screen immersive displays at medium-field distances of 2 to 15 meters. The experiment measured ego-centric depth judgments in a CAVE, a tiled display wall, and a real-world outdoor field as a control condition. We carefully modeled the outdoor field to make the three environments as similar as possible. Measuring egocentric depth judgments in large-screen immersive displays requires adapting new measurement protocols; the experiment used timed imagined walking, verbal estimation, and triangulated blind walking. We found that depth judgments from timed imagined walking and verbal estimation were very similar in all three environments. However, triangulated blind walking was accurate only in the out-door field; in the large-screen immersive displays it showed under-estimation effects that were likely caused by insufficient physical space to perform the technique. These results suggest using timed imagined walking as a primary protocol for assessing depth perception in large-screen immersive displays. We also found that depth judgments in the CAVE were more accurate than in the tiled display wall, which suggests that the peripheral scenery offered by the CAVE is helpful when perceiving virtual space.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115157101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple Behaviors Generation by 1 D.O.F. Mobile Robot","authors":"Teppei Toyoizumi, S. Yonekura, R. Tadakuma, Y. Kawaguchi, A. Kamimura","doi":"10.1109/VR.2009.4811069","DOIUrl":"https://doi.org/10.1109/VR.2009.4811069","url":null,"abstract":"In this research, we developed a sphere-shaped mobile robot that can generate multiple behaviors by using only one motor. The robot can generate the translational motion and the rotational motion by controlling the motion of the motor. The motor itself acts as an eccentric weight during motions. To generate emergent behaviors, many protrusions are mounted on the surface of the spherical body. The emergent behaviors occur by an interaction between the external world and these protrusions when the sphere is vibrating, and the robot can move in a random walk manner.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114900598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Creation of Massive Virtual Cities","authors":"Charalambos (Charis) Poullis, Suya You","doi":"10.1109/VR.2009.4811023","DOIUrl":"https://doi.org/10.1109/VR.2009.4811023","url":null,"abstract":"This research effort focuses on the historically-difficult problem of creating large-scale (city size) scene models from sensor data, including rapid extraction and modeling of geometry models. The solution to this problem is sought in the development of a novel modeling system with a fully automatic technique for the extraction of polygonal 3D models from LiDAR (Light Detection And Ranging) data. The result is an accurate 3D model representation of the real-world as shown in Figure 1. We present and evaluate experimental results of our approach for the automatic reconstruction of large U. S. cities.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132575942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Virtual Reality Simulation for Nanoparticle Manipulation and Nanoassembly using Optical Tweezers","authors":"Krishna C. Bhavaraju","doi":"10.1109/VR.2009.4811040","DOIUrl":"https://doi.org/10.1109/VR.2009.4811040","url":null,"abstract":"Nanotechnology is one of the most promising technologies for future development. This paper proposes virtual reality (VR) as a tool to simulate nano particle manipulation using optical tweezers towards achieving nano-assembly and to handle effectively issues such as difficulty in viewing, perceiving and controlling the nano-scale objects. The simulation modeled using virtual reality displays all the forces acting on nanoparticle during the manipulation. The simulation is developed for particles that belong to the Rayleigh region and represents interactions of OT (a laser beam) with the nanoparticle. The laser beam aimed on to the nanoparticle traps the particle by applying optical forces. The trapped particle is then moved by moving the laser beam. The proposed VR based simulation tool with it capabilities can be easily extended and used for creating and open system framework by connecting it to a real OT setup to control nanoparticles manipulation. In addition, a feedback system can be build to increase of precision of movement.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114494845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}