{"title":"The asynchronous time warp for virtual reality on consumer hardware","authors":"J. V. van Waveren","doi":"10.1145/2993369.2993375","DOIUrl":"https://doi.org/10.1145/2993369.2993375","url":null,"abstract":"To help create a true sense of presence in a virtual reality experience, a so-called \"time warp\" may be used. The time warp not only corrects for the optical aberration of the lenses used in a virtual reality headset, but also transforms the stereoscopic images based on the very latest head tracking information to significantly reduce the motion-to-photon delay (or end-to-end latency). The time warp operates as close as possible to the display refresh, retrieves updated head tracking information, and transforms a stereoscopic pair of images from representing a view at the time it was rendered to representing the correct view at the time it is displayed. When run asynchronously with the stereoscopic rendering, the time warp can be used to increase the perceived frame rate and to smooth out inconsistent frame rates. Asynchronous operation can also improve overall graphics hardware utilization by not requiring the stereoscopic rendering to be synchronized with the display refresh cycle. However, on today's consumer hardware it is challenging to implement a high-quality time warp that is fast, has predictable latency and throughput, and runs asynchronously. This paper discusses the various challenges and the different trade-offs that need to be considered when implementing an asynchronous time warp on consumer hardware.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"121 1","pages":"37-46"},"PeriodicalIF":0.0,"publicationDate":"2016-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82249095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
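The core idea of the time warp described in the abstract above is a purely rotational reprojection: just before display, the rendered image is re-mapped through the latest head orientation. A minimal sketch of that idea follows; this is not the paper's implementation, and the pinhole intrinsics, function names, and pure-rotation assumption are illustrative assumptions only.

```python
import math

def mat_mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_vec(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rotation_z(theta):
    """Rotation about the optical axis (roll), as a stand-in for a head-pose delta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def timewarp_homography(fx, fy, cx, cy, R_render, R_display):
    """Homography that re-maps pixels rendered under pose R_render to pose R_display.

    For a pure head rotation the warp is depth-independent: H = K * R_delta * K^-1,
    where R_delta is the rotation from the render-time pose to the display-time pose.
    """
    K = [[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]
    K_inv = [[1.0 / fx, 0.0, -cx / fx], [0.0, 1.0 / fy, -cy / fy], [0.0, 0.0, 1.0]]
    R_delta = mat_mul(R_display, transpose(R_render))
    return mat_mul(mat_mul(K, R_delta), K_inv)

def warp_pixel(H, u, v):
    """Apply the homography to one pixel (with homogeneous normalization)."""
    x, y, w = mat_vec(H, [u, v, 1.0])
    return x / w, y / w
```

With an identical render and display pose the warp is the identity; a small roll leaves the principal point fixed while rotating the rest of the image around it, which is why a late rotational correction is cheap compared with re-rendering.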
{"title":"Multiphase surface tracking with explicit contouring","authors":"Xiaosheng Li, Xiaowei He, Xuehui Liu, Baoquan Liu, E. Wu","doi":"10.1145/2671015.2671017","DOIUrl":"https://doi.org/10.1145/2671015.2671017","url":null,"abstract":"We introduce a novel framework for tracking multiphase interfaces with an explicit contouring technique. In our framework, an unsigned distance function and an additional indicator function are used to represent the multiphase system. Our method maintains the explicit polygonal meshes that define the multiphase interfaces. At each step, the distance function and indicator function are updated via semi-Lagrangian path tracing from the meshes of the previous step. Interface surfaces are then reconstructed by polygonization procedures with precomputed stencils and further smoothed with a feature-preserving non-manifold smoothing algorithm to maintain good quality. Our method is easy to implement and incorporate into multiphase simulations such as immiscible fluids, crystal grain growth and geometric flows. We demonstrate our method with several level set tests, including advection and propagation, and couple it to existing fluid simulators. The results show that our approach is stable, flexible, and effective for tracking multiphase interfaces.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"30 1","pages":"31-40"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73645463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
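The semi-Lagrangian update mentioned in the abstract above can be illustrated on a 1-D scalar field: each grid sample traces backward along the velocity field and interpolates the previous field at the departure point. The following is a minimal 1-D sketch under assumed simplifications (uniform grid, linear interpolation, clamped boundaries); it is not the paper's code, which operates on meshes in 3-D.

```python
def semi_lagrangian_advect(phi, velocity, dt, dx):
    """One semi-Lagrangian step for a 1-D field phi on a uniform grid.

    Each grid point x_i looks back to x_i - u_i * dt and linearly interpolates
    the old field there; clamping keeps the lookup inside the grid. The scheme
    is unconditionally stable because values are interpolated, never extrapolated.
    """
    n = len(phi)
    new_phi = [0.0] * n
    for i in range(n):
        back = i - velocity[i] * dt / dx        # departure point, in index units
        back = min(max(back, 0.0), n - 1.0)     # clamp to the grid
        j = int(back)
        frac = back - j                         # fractional offset for interpolation
        j1 = min(j + 1, n - 1)
        new_phi[i] = (1.0 - frac) * phi[j] + frac * phi[j1]
    return new_phi
```

With a uniform rightward velocity of one cell per step, each sample simply inherits its left neighbour's value, which is the expected pure-transport behaviour.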
{"title":"Virtualized welding: a new paradigm for tele-operated welding","authors":"Bo Fu, Yukang Liu, Yuming Zhang, Ruigang Yang","doi":"10.1145/2671015.2671136","DOIUrl":"https://doi.org/10.1145/2671015.2671136","url":null,"abstract":"We present a new mixed reality system that supports tele-operation of a welding robot. We create a 3D mock-up of the welding pieces and use projector-based displays to visualize the welding process directly on the 3D display. Multiple cameras are used to capture both the welding environment and the operator's motion. The welder can therefore monitor and control the welding process as if the welding were performed on the mock-up, which provides proper spatial and 3D cues. We evaluated our system with a number of control tasks, and the results show the effectiveness of our system compared to traditional alternatives.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"96 1","pages":"241-242"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78348654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FishEyA: live broadcasting around 360 degrees","authors":"E. Canessa, L. Tenze","doi":"10.1145/2671015.2671135","DOIUrl":"https://doi.org/10.1145/2671015.2671135","url":null,"abstract":"Our project aims to build a low-cost prototype system for cognitive studies based on live 360-degree vision. The final goal is to have an original broadcasting channel that can transmit a panoramic view at a distance, in real time, and with minimal computation. The first phase of our project, named FishEyA, is to develop software optimized to run on mini-computers such as the Raspberry Pi, with a lightweight GUI to easily configure the 360° visual field and activate the streaming signal.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"17 1","pages":"227-228"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86109824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A view from the hill: where cross reality meets virtual worlds","authors":"C. Davies, Alan Miller, C. Allison","doi":"10.1145/2671015.2671138","DOIUrl":"https://doi.org/10.1145/2671015.2671138","url":null,"abstract":"We present the cross reality [Lifton 2007] system 'Mirrorshades', which enables a user to be present in and aware of both a virtual reality environment and the real world at the same time. In so doing, the challenge of the vacancy problem is addressed by lightening the cognitive load needed to switch between realities and to navigate the virtual environment. We present a case study in the context of a cultural heritage application wherein users are able to compare a reconstruction of an important 15th-century chapel with its present-day instantiation, whilst walking through them.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"34 1","pages":"213"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83123523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"I'm in VR!: using your own hands in a fully immersive MR system","authors":"F. Tecchia, Giovanni Avveduto, R. Brondi, M. Carrozzino, M. Bergamasco, L. Alem","doi":"10.1145/2671015.2671123","DOIUrl":"https://doi.org/10.1145/2671015.2671123","url":null,"abstract":"This paper presents a novel fully immersive Mixed Reality system that we have recently developed, in which the user freely walks in a life-size virtual scenario wearing an HMD and can see and use her/his own body when interacting with objects. This form of natural interaction is made possible in our system because the user's hands are captured in real time by an RGBD camera on the HMD. This allows the system to obtain, in real time, a textured geometric mesh of the hands and body (as seen from her/his own perspective) that can be rendered like any other polygonal model in the scene. Our hypothesis is that by presenting to the users an egocentric view of the virtual environment \"populated\" by their own bodies, a very strong feeling of presence is developed as well.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"62 1","pages":"73-76"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89586006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Desktop virtual reality for emergency preparedness: user evaluation of an aircraft ditching experience under different fear arousal conditions","authors":"L. Chittaro, Fabio Buttussi, Nicola Zangrando","doi":"10.1145/2671015.2671025","DOIUrl":"https://doi.org/10.1145/2671015.2671025","url":null,"abstract":"Virtual Reality (VR), in the form of 3D interactive simulations of emergency scenarios, is increasingly used for emergency preparedness training. This paper advances knowledge about different aspects of such virtual emergency experiences, showing that: (i) the designs we propose in the paper are effective in improving emergency preparedness of common citizens, considering aviation safety as a relevant case study, (ii) changing specific visual and auditory features is effective to create emotionally different versions of the same experience, increasing the level of fear aroused in users, and (iii) the protection motivation role of fear highlighted by psychological studies of traditional media applies to desktop VR too.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"15 1","pages":"141-150"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75644154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards real-time credible and scalable agent-based simulations of autonomous pedestrians navigation","authors":"Patrick Simo Kanmeugne, A. Beynier","doi":"10.1145/2671015.2671030","DOIUrl":"https://doi.org/10.1145/2671015.2671030","url":null,"abstract":"In this paper, we focus on real-time simulation of autonomous pedestrians navigation. We introduce a Macroscopic-Influenced Microscopic (MIM) approach which aims to reduce the gap between microscopic and macroscopic approaches by providing credible walking paths for a potentially highly congested crowd of autonomous pedestrians. Our approach originates from a least-effort formulation of the navigation task, which allows us to consistently account for congestion at every level of decision. We use the multi-agent paradigm and describe pedestrians as autonomous and situated agents who plan dynamically for energy-efficient paths and interact with each other through the environment. The navigable space is considered as a set of contiguous resources that agents use to build their paths. We emulate the dynamic path computation for each agent with an evolutionary search algorithm, specifically designed to be executed in real time, individually and autonomously. We have compared an implementation of our approach with the ORCA model on low-density and high-density scenarios, and obtained promising results in terms of credibility and scalability. We believe that the ORCA model and other microscopic models could easily be extended to embrace our approach, thus providing richer simulations of potentially highly congested crowds of autonomous pedestrians.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"1 1","pages":"127-136"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74349604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
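One way to read the least-effort formulation in the abstract above is as a path cost in which traversing a navigable resource becomes more expensive as its occupancy approaches capacity, so congested resources are avoided at every level of decision. The toy sketch below illustrates only that reading; the cost model, the linear congestion penalty, and all names are assumptions for illustration, not the paper's formulation.

```python
def least_effort_path_cost(path, occupancy, capacity, base_cost=1.0):
    """Toy congestion-aware path cost over a set of navigable resources.

    Each resource (cell) on the path contributes a base traversal effort,
    scaled up by the fraction of its capacity currently in use, so an agent
    minimizing this cost trades longer detours against congested shortcuts.
    """
    total = 0.0
    for cell in path:
        load = occupancy.get(cell, 0) / capacity   # fraction of capacity in use
        total += base_cost * (1.0 + load)          # effort grows with congestion
    return total
```

A planner (evolutionary or otherwise) would compare candidate paths by this cost and prefer the cheapest, which naturally spreads agents across the navigable space as occupancy rises.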
{"title":"Dual sensor filtering for robust tracking of head-mounted displays","authors":"Nicholas T. Swafford, B. Boom, K. Subr, David Sinclair, D. Cosker, Kenny Mitchell","doi":"10.1145/2671015.2675694","DOIUrl":"https://doi.org/10.1145/2671015.2675694","url":null,"abstract":"We present a low-cost solution for yaw drift in head-mounted display systems that performs better than current commercial solutions and provides a wide capture area for pose tracking. Our method applies an extended Kalman filter to combine marker tracking data from an overhead camera with onboard head-mounted display accelerometer readings. To achieve low latency, we accelerate marker tracking with color blob localisation and perform this computation on the camera server, which only transmits essential pose data over WiFi for an unencumbered virtual reality system.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"13 1","pages":"221-222"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79304993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
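The abstract above combines overhead-camera marker tracking with onboard inertial readings through an extended Kalman filter. A much simpler scalar Kalman filter on yaw alone captures the essential drift-correction idea: the gyro rate drives the prediction and absolute camera yaw corrects it. This sketch is illustrative only; the noise constants and the reduction to one dimension are assumptions, not the paper's filter.

```python
class YawKalmanFilter:
    """Scalar Kalman filter fusing an integrated gyro rate (prediction) with
    absolute camera yaw measurements (correction), so gyro drift stays bounded.
    """

    def __init__(self, q=0.01, r=0.5):
        self.yaw = 0.0   # state estimate (radians)
        self.p = 1.0     # estimate variance
        self.q = q       # process noise: gyro integration error per step
        self.r = r       # measurement noise: camera yaw variance

    def predict(self, gyro_rate, dt):
        """Integrate the gyro rate; uncertainty grows by the process noise."""
        self.yaw += gyro_rate * dt
        self.p += self.q

    def update(self, camera_yaw):
        """Blend in an absolute camera measurement via the Kalman gain."""
        k = self.p / (self.p + self.r)           # gain: trust in the measurement
        self.yaw += k * (camera_yaw - self.yaw)
        self.p *= (1.0 - k)                      # correction shrinks uncertainty
        return self.yaw
```

Between camera frames the filter coasts on gyro predictions (low latency); each marker observation pulls the estimate back toward the drift-free camera value.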
{"title":"Accelerating vision-based 3D indoor localization by distributing image processing over space and time","authors":"D. Yun, Hyunseok Chang, T. V. Lakshman","doi":"10.1145/2671015.2671018","DOIUrl":"https://doi.org/10.1145/2671015.2671018","url":null,"abstract":"In a vision-based 3D indoor localization system, localizing the user's device at a high frame rate is important to support real-time augmented reality applications. However, vision-based 3D localization typically involves 2D keypoint detection and 2D-3D matching processes, which are in general too computationally intensive to be carried out at a high frame rate (e.g., 30 fps) on commodity hardware such as laptops or smartphones. To reduce per-frame computation time for 3D localization, we present a new method that distributes the required computation over space and time by splitting a video frame region into multiple sub-blocks and processing only one sub-block, in a rotating sequence, at each video frame. The proposed method is general enough to be applied to any keypoint detection and 2D-3D matching scheme. We apply the method in a prototype 3D indoor localization system and evaluate its performance in a 120 m-long indoor hallway environment using 5,200 video frames of 640x480 (VGA) resolution and a commodity laptop. When SIFT-based keypoint detection is used, our method reduces the average and maximum computation time per frame by factors of 10 and 7, respectively, with a marginal increase in positioning error (e.g., 0.17 m). This improvement enables the frame processing rate to increase from 3.2 fps to 23.3 fps.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"88 1","pages":"77-86"},"PeriodicalIF":0.0,"publicationDate":"2014-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79514362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
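The rotating sub-block schedule described in the abstract above reduces to a simple round-robin mapping from frame index to pixel rectangle, so per-frame work shrinks by the number of blocks while every region is still refreshed periodically. A minimal sketch of that scheduling step (a uniform grid and row-major visiting order are assumptions for illustration; the paper's keypoint detection and 2D-3D matching are not reproduced here):

```python
def subblock_for_frame(frame_index, grid_cols, grid_rows, width, height):
    """Pixel rectangle (x0, y0, x1, y1) of the sub-block to process this frame.

    Blocks are visited in a rotating (round-robin) row-major sequence, so each
    block is refreshed once every grid_cols * grid_rows frames and per-frame
    processing covers only 1/(grid_cols*grid_rows) of the image.
    """
    n_blocks = grid_cols * grid_rows
    idx = frame_index % n_blocks                # which block this frame handles
    col, row = idx % grid_cols, idx // grid_cols
    bw, bh = width // grid_cols, height // grid_rows
    return (col * bw, row * bh, (col + 1) * bw, (row + 1) * bh)
```

For a 640x480 frame with a 2x2 grid, frames 0-3 cover the four quadrants in turn and frame 4 wraps back to the first; keypoint detection would then run only inside the returned rectangle each frame.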