{"title":"A remote mobile collaborative AR system for learning in physics","authors":"Jian Gu, Nai Li, H. Duh","doi":"10.1109/VR.2011.5759496","DOIUrl":"https://doi.org/10.1109/VR.2011.5759496","url":null,"abstract":"This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol to enable multiplayer functionality in a mobile AR environment. One phone acts as the server and the other as the client. The server phone captures video frames, processes each frame, and sends the current frame and the markers' transformation matrices to the client phone.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130519832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented reality motion-based robotics off-line programming","authors":"Diana Araque, Ricardo Diaz, B. Perez-Gutierrez, A. Uribe","doi":"10.1109/VR.2011.5759463","DOIUrl":"https://doi.org/10.1109/VR.2011.5759463","url":null,"abstract":"Augmented reality allows simulating, designing, projecting and validating robotic workcells in industrial environments that are not equipped with real manipulators. In this paper, a study and implementation of a gesture-programmed robotic workcell through augmented reality is presented. The result was an interactive environment in which the user can program an industrial robot through gestures, enabled by the development of a computer-based framework.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"344 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124249651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the realism in motion-based driving simulators by adapting tilt-translation technique to human perception","authors":"Anca Stratulat, Vincent Roussarie, J. Vercher, C. Bourdin","doi":"10.1109/VR.2011.5759435","DOIUrl":"https://doi.org/10.1109/VR.2011.5759435","url":null,"abstract":"While modern dynamic driving simulators equipped with six degrees-of-freedom (6-DOF) hexapods and X-Y platforms have improved realism, mechanical limitations prevent them from offering a fully realistic driving experience. Solutions are often sought in the \"washout\" algorithm, with linear accelerations simulated by an empirically chosen combination of translation and tilt-coordination, based on the incapacity of the otolith organs to distinguish between inclination of the head and linear acceleration. In this study, we investigated the most effective combination of tilt and translation to provide a realistic perception of movement. We tested 3 different braking intensities (decelerations), each with 5 inversely proportional tilt/translation ratios. Subjects evaluated braking intensity using an indirect method corresponding to a 2-Alternative-Forced-Choice paradigm. We find that the perceived intensity of braking depends on the tilt/translation ratio used: for small and average decelerations (0.6 and 1.0 m/s2), increased tilt yielded greater overestimation of braking, inversely proportional to intensity; for high decelerations (1.4 m/s2), in half of the conditions braking was overestimated with more tilt than translation and underestimated with more translation than tilt. We define a mathematical function describing the relationship between tilt, translation and the desired level of deceleration, intended as a supplement to motion cueing algorithms, which should improve the realism of driving simulations.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121116364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SiXton's curse — Simulator X demonstration","authors":"Martin Fischbach, Dennis Wiebusch, Anke Giebler-Schubert, Marc Erich Latoschik, Stephan Rehfeld, H. Tramberend","doi":"10.1109/VR.2011.5759495","DOIUrl":"https://doi.org/10.1109/VR.2011.5759495","url":null,"abstract":"We present SiXton's Curse, a computer game, to illustrate the benefits of a novel simulation platform. Simulator X [2] targets virtual, augmented, and mixed reality applications as well as computer games. The game simulates a medieval village called SiXton that can be explored and experienced using gestures and speech for input. SiXton's Curse utilizes multiple independent components for physical simulation, sound and graphics rendering, artificial intelligence, as well as for multi-modal interaction (MMI). The components are already an integral part of Simulator X's current version.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126179743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a tactile display to support materials perception in virtual environments","authors":"M. Mengoni, B. Colaiocco, M. Peruzzini, M. Germani","doi":"10.1109/VR.2011.5759481","DOIUrl":"https://doi.org/10.1109/VR.2011.5759481","url":null,"abstract":"Simulating material properties by means of haptic devices is one of the most significant issues in the design of new human-computer interfaces to support virtual prototype interaction in numerous product design activities. Notwithstanding several research attempts, a truly natural perception of materials has not yet been achieved. We present a novel tactile display. It combines both mechanical and electrotactile approaches to simulate natural tactile sensations. In order to enhance the experience, acoustic and visual cues are integrated. A signal generation method allows correlating material properties with simulated signals according to the characteristics of fingertip mechanoreceptors. The final goal is to make users perceive the object's surface roughness, slickness and texture coarseness. The research results are the developed simulation method and the detailed design of the whole tactile display. The preliminary prototype is under construction.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125203130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continual surface-based multi-projector blending for moving objects","authors":"P. Lincoln, G. Welch, H. Fuchs","doi":"10.1109/VR.2011.5759447","DOIUrl":"https://doi.org/10.1109/VR.2011.5759447","url":null,"abstract":"We introduce a general technique for blending imagery from multiple projectors on a tracked, moving, non-planar object. Our technique continuously computes visibility of pixels over the surfaces of the object and dynamically computes the per-pixel weights for each projector. This approach supports smooth transitions between areas of the object illuminated by different numbers of projectors, down to the illumination contribution of individual pixels within each polygon. To achieve real-time performance, we take advantage of graphics hardware, implementing much of the technique with a custom dynamic blending shader program on the GPU associated with each projector. We demonstrate the technique with tracked objects illuminated by three projectors.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114189531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grasping virtual objects with multi-point haptics","authors":"Q. Ang, B. Horan, Z. Najdovski, S. Nahavandi","doi":"10.1109/VR.2011.5759462","DOIUrl":"https://doi.org/10.1109/VR.2011.5759462","url":null,"abstract":"The majority of commercially available haptic devices offer a single point of haptic interaction. These devices are limited when it is desirable to grasp with multiple fingers in applications including virtual training, telesurgery and telemanipulation. Multi-point haptic devices facilitate a greater range of interactions. This paper presents a gripper attachment to enable multi-point haptic grasping in virtual environments. The approach employs two Phantom Omni haptic devices to independently render forces to the user's thumb and other fingers. Compared with more complex approaches to multi-point haptics, this approach provides a number of advantages including low cost, reliability, and ease of programming. The ability of the integrated multi-point haptic platform to interact within a CHAI 3D virtual environment is also presented.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"134 Supplement_1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133602709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A little unreality in a realistic replica environment degrades distance estimation accuracy","authors":"Lane Phillips, V. Interrante","doi":"10.1109/VR.2011.5759485","DOIUrl":"https://doi.org/10.1109/VR.2011.5759485","url":null,"abstract":"Users of IVEs typically underestimate distances during blind walking tasks, even though they are accurate at this task in the real world. The cause of this underestimation is still not known. Our previous work found an exception to this effect: When the virtual environment was a realistic, co-located replica of the concurrently occupied real environment, users did not significantly underestimate distances. However, when the replica was rendered in an NPR style, we found that users underestimated distances. In this study we explore whether the inaccuracy in distance estimation could be due to lack of size and distance cues in our NPR IVE, or if it could be due to a lack of presence. We ran blind walking trials in a new replica IVE that combined features of the previous two IVEs. Participants significantly underestimated distances in this environment.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134573722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accelerated polyhedral visual hulls using OpenCL","authors":"Toby Duckworth, D. Roberts","doi":"10.1109/VR.2011.5759469","DOIUrl":"https://doi.org/10.1109/VR.2011.5759469","url":null,"abstract":"We present a method for reconstructing the visual hull (VH) of an object in real-time from multiple video streams. A state-of-the-art polyhedral reconstruction algorithm is accelerated by implementing it for parallel execution on a multi-core graphics processor (GPU). The time taken to reconstruct the VH is measured for both the accelerated and non-accelerated implementations of the algorithm, over a range of image resolutions and numbers of cameras. The results presented are of relevance to researchers in the field of 3D reconstruction at interactive frame rates (real-time), for applications such as telepresence.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125352354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shadow walking: An unencumbered locomotion technique for systems with under-floor projection","authors":"David J. Zielinski, Ryan P. McMahan, R. Brady","doi":"10.1109/VR.2011.5759456","DOIUrl":"https://doi.org/10.1109/VR.2011.5759456","url":null,"abstract":"When viewed from below, a user's feet cast shadows onto the floor screen of an under-floor projection system, such as a six-sided CAVE. Tracking those shadows with a camera provides enough information for calculating a user's ground-plane location, foot orientation, and footstep events. We present Shadow Walking, an unencumbered locomotion technique that uses shadow tracking to sense a user's walking direction and step speed. Shadow Walking affords virtual locomotion by detecting whether a user is walking in place. In addition, Shadow Walking supports a sidestep gesture, similar to the iPhone's pinch gesture. In this paper, we describe how we implemented Shadow Walking and present a preliminary assessment of our new locomotion technique. We have found that Shadow Walking has the advantages of being unencumbered, inexpensive, and easy to implement compared to other walking-in-place approaches. It also has potential for extended gestures and multi-user locomotion.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131163507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}