{"title":"Real time whole body motion mapping for avatars and robots","authors":"B. Spanlang, Xavi Navarro, Jean-Marie Normand, Sameer Kishore, Rodrigo Pizarro, M. Slater","doi":"10.1145/2503713.2503747","DOIUrl":"https://doi.org/10.1145/2503713.2503747","url":null,"abstract":"We describe a system that allows for controlling different robots and avatars from a real time motion stream. The underlying problem is that motion data from tracking systems is usually represented differently to the motion data required to drive an avatar or a robot: there may be different joints, motion may be represented by absolute joint positions and rotations or by a root position, bone lengths and relative rotations in the skeletal hierarchy. Our system resolves these issues by remapping in real time the tracked motion so that the avatar or robot performs motions that are visually close to those of the tracked person. The mapping can also be reconfigured interactively at run-time. We demonstrate the effectiveness of our system by case studies in which a tracked person is embodied as an avatar in immersive virtual reality or as a robot in a remote location. We show this with a variety of tracking systems, humanoid avatars and robots.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"9 1","pages":"175-178"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85164449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust prediction of auditory step feedback for forward walking","authors":"Markus Zank, Thomas Nescher, A. Kunz","doi":"10.1145/2503713.2503735","DOIUrl":"https://doi.org/10.1145/2503713.2503735","url":null,"abstract":"Virtual reality systems supporting real walking as a navigation interface usually lack auditory step feedback, although this could give additional information to the user e.g. about the ground he is walking on. In order to add matching auditory step feedback to virtual environments, we propose a calibration-free and easy to use system that can predict the occurrence time of stepping sounds based on human gait data.\u0000 Our system is based on the timing of reliably occurring characteristic events in the gait cycle which are detected using foot mounted accelerometers and gyroscopes. This approach not only allows us to detect but to predict the time of an upcoming step sound in realtime. Based on data gathered in an experiment, we compare different suitable events that allow a tradeoff between the maximum precision of the prediction and the maximum time by which the sound can be predicted.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"36 1","pages":"119-122"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90052203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Refurbish a single user 3D application into a multi-user distributed service: a case study","authors":"N. A. Nijdam, Y. Tisserand, N. Magnenat-Thalmann","doi":"10.1145/2503713.2503721","DOIUrl":"https://doi.org/10.1145/2503713.2503721","url":null,"abstract":"Through a multitude of different devices, such as phones, tablets, desktop systems etc., we are able to exchange data across the world, independently of location, time and the device used. Almost by default applications are extended with networking capabilities, either deployed locally on the client device and connecting to a server or, as the trend is now, fully hosted on the Internet (servers) as a service (cloud services). However many 3D applications are still restricted to a single platform, as it is costly in terms of developing, maintaining, adapting and providing support for multiple platforms (software as well for hardware dependencies). Therefore applications that we see now available on a variety of devices are either single-platform, single-user, non-real-time collaborative or graphically not demanding. By using an adaptive remote rendering approach it is feasible to take advantage of these new devices and provide means for old and new 3D oriented applications to be used in collaborative environments. In this paper, we look at the conversion of a single user 3D application into a multi-user service. Analyse the requirements needed for adapting the software for being integrated into the \"Herd framework\". Offering remote rendering to end devices, a single instance accessible to multiple users and in order to optimize each instance of the application for different devices the user interface representation is handled in a dynamically using a device profile, as well for handling different input techniques.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"6 1","pages":"193-200"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73279425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Persuading people in a remote destination to sing by beaming there","authors":"Pierre Bourdin, Josep Maria Tomàs Sanahuja, Carlota Crusafon Moya, P. Haggard, M. Slater","doi":"10.1145/2503713.2503724","DOIUrl":"https://doi.org/10.1145/2503713.2503724","url":null,"abstract":"We built a Collaborative Virtual Environment (CVE) allowing one person, the 'visitor' to be digitally transported to a remote destination to interact with local people there. This included full body tracking, vibrotactile feedback and voice. This allowed interactions in the same CVE between multiple people situated in different physical remote locations. This system was used for an experiment to study whether the conveyance of touch has an impact on the willingness of participants embodied in the CVE to sing in public.\u0000 In a first experimental condition, the experimenter virtually touched the avatar of the participants on the shoulder, producing vibrotactile feedback. In another condition using the identical physical setup, the vibrotactile displays were not activated, so that they would not feel the touch. Our hypothesis was that the tactile touch condition would produce a greater likelihood of compliance with the request to sing. In a second part we examined the hypothesis that people might be more willing to sing (execute an embarrassing task) in a CVE, because of the anonymity provided by virtual reality. Hence we carried out a similar study in physical reality.\u0000 The results suggest that the tactile intervention had no effect on the sensations of body ownership, presence or the behaviours of the participants, in spite of the finding that the sensation of touch itself was effectively realised. Moreover we found an overall similarity in responses between the VR and real conditions.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"3 1","pages":"123-132"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73801026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video inlays: a system for user-friendly matchmove","authors":"Dmitry Rudoy, Lihi Zelnik-Manor","doi":"10.1145/2503713.2503741","DOIUrl":"https://doi.org/10.1145/2503713.2503741","url":null,"abstract":"Digital editing technology is highly popular as it enables to easily change photos and add to them artificial objects. Conversely, video editing is still challenging and mainly left to the professionals. Even basic video manipulations involve complicated software tools that are typically not adopted by the amateur user. In this paper we propose a system that allows an amateur user to performs a basic matchmove by adding an inlay to a video. Our system does not require any previous experience and relies on a simple user interaction. We allow adding 3D objects and volumetric textures to virtually any video. We demonstrate the method's applicability on a variety of videos downloaded from the web.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"12 1","pages":"219-222"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82648710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AMITIES: avatar-mediated interactive training and individualized experience system","authors":"A. Nagendran, Remo Pillat, A. Kavanaugh, G. Welch, C. Hughes","doi":"10.1145/2503713.2503731","DOIUrl":"https://doi.org/10.1145/2503713.2503731","url":null,"abstract":"This paper presents an architecture to control avatars and virtual characters in remote interaction environments. A human-in-the-loop (interactor) metaphor provides remote control of multiple virtual characters, with support for multiple interactors and multiple observers. Custom animation blending routines and a gesture-based interface provide interactors with an intuitive digital puppetry paradigm. This paradigm reduces the cognitive and physical loads on the interactor while supporting natural bi-directional conversation between a user and the virtual characters or avatar counterparts. A multi-server-client architecture, based on a low-demand network protocol, connects the user environment, interactor station(s) and observer station(s). The associated system affords the delivery of personalized experiences that adapt to the actions and interactions of individual users, while staying true to each virtual character's personality and backstory. This approach has been used to create experiences designed for training, education, rehabilitation, remote presence and other-related applications.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"62 1","pages":"143-152"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84023177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Drilling into complex 3D models with gimlenses","authors":"Cyprien Pindat, Emmanuel Pietriga, O. Chapuis, C. Puech","doi":"10.1145/2503713.2503714","DOIUrl":"https://doi.org/10.1145/2503713.2503714","url":null,"abstract":"Complex 3D virtual scenes such as CAD models of airplanes and representations of the human body are notoriously hard to visualize. Those models are made of many parts, pieces and layers of varying size, that partially occlude or even fully surround one another. We introduce Gimlenses, a multi-view, detail-in-context visualization technique that enables users to navigate complex 3D models by interactively drilling holes into their outer layers to reveal objects that are buried, possibly deep, into the scene. Those holes get constantly adjusted so as to guarantee the visibility of objects of interest from the parent view. Gimlenses can be cascaded and constrained with respect to one another, providing synchronized, complementary viewpoints on the scene. Gimlenses enable users to quickly identify elements of interest, get detailed views of those elements, relate them, and put them in a broader spatial context.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"25 1","pages":"223-230"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75762900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bubble bee, an alternative to arrow for pointing out directions","authors":"Jonathan Wonner, J. Grosjean, Antonio Capobianco, D. Bechmann","doi":"10.1145/2503713.2503753","DOIUrl":"https://doi.org/10.1145/2503713.2503753","url":null,"abstract":"We present Bubble Bee - an extension for the 3D bubble cursor in Virtual Environments (VEs). This technique provides an alternative to arrows for pointing out a direction in a 3D scene.\u0000 Bubble Bee is based on a ring concept. A circular ring in 3D appears like an ellipse, according to its orientation. This orientation is easy to infer by comparing the minor radius which varies with the view angle, to the reference major radius which is constant and equal to the radius of the ring. Bubble Bee is a sphere with several rings oriented towards the same direction. The rings give a natural axis to the sphere. A color gradient sets the direction of this axis.\u0000 We compared the performance of Bubble Bee and a 3D arrow through an experiment. The participants were asked to indicate which object was pointed by the two competing techniques. No significant differences on decision time were found, while Bubble Bee was shown to be nearly as accurate as a 3D arrow.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"35 1","pages":"97-100"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85560608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can we use a brain-computer interface and manipulate a mouse at the same time?","authors":"Jonathan Mercier-Ganady, E. Loup-Escande, Laurent George, C. Busson, M. Marchal, A. Lécuyer","doi":"10.1145/2503713.2503744","DOIUrl":"https://doi.org/10.1145/2503713.2503744","url":null,"abstract":"Brain-Computer Interfaces (BCI) introduce a novel way of interacting with real and virtual environments by directly exploiting cerebral activity. However in most setups using a BCI, the user is explicitly asked to remain as motionless as possible, since muscular activity is commonly admitted to add noise and artifacts in brain electrical signals. Thus, as for today, people have been rarely let using other classical input devices such as mice or joysticks simultaneously to a BCI-based interaction. In this paper, we present an experimental study on the influence of manipulating an input device such as a standard computer mouse on the performance of a BCI system. We have designed a 2-class BCI which relies on Alpha brainwaves to discriminate between focused versus relaxed mental activities. The study uses a simple virtual environment inspired by the well-known Pac-Man videogame and based on BCI and mouse controls. The control of mental activity enables to eat pellets in a simple 2D virtual maze. Different levels of motor activity achieved with the mouse are progressively introduced in the gameplay: 1) no motor activity (control condition), 2) a semi-automatic motor activity, and 3) a highly-demanding motor activity. As expected the BCI performance was found to slightly decrease in presence of motor activity. However, we found that the BCI could still be successfully used in all conditions, and that relaxed versus focused mental activities could still be significantly discriminated even in presence of a highly-demanding mouse manipulation. These promising results pave the way to future experimental studies with more complex mental and motor activities, but also to novel 3D interaction paradigms that could mix BCI and other input devices for virtual reality and videogame applications.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"161 1","pages":"69-72"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80179382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distorted shadow mapping","authors":"Nixiang Jia, Dening Luo, Yanci Zhang","doi":"10.1145/2503713.2503746","DOIUrl":"https://doi.org/10.1145/2503713.2503746","url":null,"abstract":"In this paper, a novel algorithm named Distorted Shadow Maps (DSMs) is proposed to generate high-quality hard shadows in real-time. The method focuses on addressing the shadow aliasing caused by different sample distribution between light and camera space. Inspired by the fact that such aliasing occurs in the depth-discontinuous regions of shadow map, in DSMs, a sample redistribution mechanism is designed to enlarge the geometric shadow silhouette regions by shrinking the regions that are completely in light or in shadows. Consequently, more texels in the shadow map are covered by the geometric silhouettes, indicating that silhouettes get more samples. The experimental results show that the jagged edges of hard shadows are reduced by the DSMs algorithm.","PeriodicalId":93673,"journal":{"name":"Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM Symposium on Virtual Reality Software and Technology","volume":"19 1","pages":"209-214"},"PeriodicalIF":0.0,"publicationDate":"2013-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91128860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}