Title: A platform for bimanual virtual assembly training with haptic feedback in large multi-object environments
Authors: M. Sagardia, T. Hulin, K. Hertkorn, Philipp Kremer, Simon Schätzle
Venue: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST 2016), November 2, 2016
DOI: https://doi.org/10.1145/2993369.2993386
Abstract: We present a virtual reality platform that addresses and integrates several currently challenging research topics in virtual assembly: realistic and practical scenarios with multiple complex geometries, bimanual six-DoF haptic interaction for hands and arms, and intuitive navigation in large workspaces. We place a special focus on our collision computation framework, which can display stiff and stable forces at 1 kHz using a combination of penalty- and constraint-based haptic rendering methods. Real-time interaction with multiple arbitrary geometries is supported, as are several interfaces, allowing for collaborative training experiences. We provide performance results for an exemplary car assembly sequence that demonstrate the readiness of the system.
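The abstract above mentions a combination of penalty- and constraint-based haptic rendering. As a rough illustration of the penalty-based half only (the function name and stiffness value are illustrative, not taken from the paper), a minimal sketch:

```python
import numpy as np

def penalty_force(penetration_depth, contact_normal, stiffness=1000.0):
    """Generic penalty-based contact force: proportional to the
    penetration depth along the contact normal, and zero when the
    objects are separated."""
    if penetration_depth <= 0.0:
        return np.zeros(3)
    n = np.asarray(contact_normal, dtype=float)
    n /= np.linalg.norm(n)  # unit normal pointing out of the obstacle
    return stiffness * penetration_depth * n
```

With a stiffness of 1000 N/m, a 2 mm penetration yields a 2 N restoring force; a constraint-based method would instead solve for a non-penetrating proxy configuration, which is what keeps the displayed forces stable at high stiffness.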
Title: Concept for content-aware, automatic shifting for spherical panoramas
Authors: Daniel Pohl, O. Grau
Venue: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST 2016), November 2, 2016
DOI: https://doi.org/10.1145/2993369.2996297
Abstract: With the adoption of virtual reality in the consumer space, spherical panorama photos are gaining popularity. Through wide-angle head-mounted displays, they can be experienced in a natural way and offer the user an immersive view of the captured scene. While being used in virtual reality, the alignment of the saved image does not matter much. However, when displaying the panorama on a 2D screen, the alignment can make a difference in how pleasant the image looks. We propose an automatic method to losslessly shift the image to make it look better on 2D screens.
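The shifting is lossless because the horizontal axis of an equirectangular panorama wraps around 360 degrees, so a whole-pixel circular shift permutes columns without resampling. A minimal sketch of that operation (the content-aware choice of the shift amount, which is the paper's actual contribution, is not reproduced here):

```python
import numpy as np

def shift_panorama(image, shift_px):
    """Horizontally rotate an equirectangular panorama by a whole number
    of pixels. Columns wrap around, so the shift is exactly invertible
    and loses no image data."""
    return np.roll(image, shift_px, axis=1)
```

Shifting by `shift_px` and then by `-shift_px` returns the original image bit-for-bit, which is what "lossless" means here.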
Title: AR interaction paradigm for closed reduction of long-bone fractures via external fixation
Authors: F. Cutolo, S. Carli, P. Parchi, Luca Canalini, M. Ferrari, M. Lisanti, V. Ferrari
Venue: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST 2016), November 2, 2016
DOI: https://doi.org/10.1145/2993369.2996317
Abstract: We present an intuitive and ergonomic AR strategy to be coupled with a standard external fixation system, aimed at aiding the accurate closed reduction of long-bone shaft fractures. The correct six-DoF alignment between the bone fragments can be retrieved by manually repositioning a pair of reference frames constrained to the two extremities of the fixator so as to minimize the geometric distance, on the image plane, between planned/virtual landmarks and their observed/real counterparts. The reduction accuracy was positively validated in vitro in a pilot study that involved an orthopedic surgeon.
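The alignment criterion described above is an image-plane distance between projected virtual landmarks and their observed counterparts. A minimal sketch of such an error measure (the function name and the use of the mean are illustrative assumptions, not the paper's exact metric):

```python
import numpy as np

def reprojection_error(virtual_pts, observed_pts):
    """Mean Euclidean distance on the image plane between projected
    virtual landmarks and their observed real counterparts. Both inputs
    are Nx2 arrays of pixel coordinates; repositioning the fragments so
    this value approaches zero indicates correct alignment."""
    v = np.asarray(virtual_pts, dtype=float)
    o = np.asarray(observed_pts, dtype=float)
    return float(np.mean(np.linalg.norm(v - o, axis=1)))
```

In the paper's workflow the surgeon drives this error down manually by repositioning the fixator's reference frames, rather than by numerical optimization.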
Title: Real time 3D geometry correction for folding screens or projectors with distorting lenses
Authors: Fabien Picarougne, Aurélien Milliat
Venue: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST 2016), November 2, 2016
DOI: https://doi.org/10.1145/2993369.2996326
Abstract: In this article, we describe a new method for displaying a 3D scene on curved surfaces in real time and in a geometrically correct way. This method differs from existing solutions in the literature by allowing its application with folding screens or projectors with distorting lenses. Our algorithm is not limited to a particular shape of the display surface and takes the position of the user into account to display an image that is geometrically correctly perceived from the user's viewpoint. The projection process of 3D objects is divided into three phases. The first two steps are independent of the 3D scene and act as a buffer that stores particular values and accelerates the calculations of the third step. The execution of the latter may then be performed in linear time in the number of vertices of the geometry to be displayed.
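The three-phase structure described above is a precompute-then-lookup pattern: scene-independent work is cached once, and only a cheap linear pass runs per frame. The paper operates per vertex; the sketch below shows the same idea in a simplified per-pixel variant (all names and the nearest-neighbour lookup are illustrative assumptions):

```python
import numpy as np

def build_warp_map(width, height, warp_fn):
    """Phases 1-2 (scene-independent): precompute, once per display
    configuration and viewpoint, where each output pixel should sample
    the undistorted render. warp_fn maps output pixel grids (x, y) to
    fractional source coordinates."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    map_x, map_y = warp_fn(xs, ys)
    return map_x.astype(np.float32), map_y.astype(np.float32)

def apply_warp(frame, map_x, map_y):
    """Phase 3 (per frame): look up each output pixel in the rendered
    frame using the precomputed map -- linear in the number of output
    pixels, with no per-frame geometry work."""
    h, w = frame.shape[:2]
    xi = np.clip(np.round(map_x).astype(int), 0, w - 1)
    yi = np.clip(np.round(map_y).astype(int), 0, h - 1)
    return frame[yi, xi]
```

The design point is that `warp_fn` can encode an arbitrary screen shape or lens distortion; changing the surface only invalidates the cached map, not the per-frame path.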
Title: Effects of speed and transitions on target-based travel techniques
Authors: Daniel Medeiros, Eduardo Cordeiro, Daniel Mendes, Maurício Sousa, A. Raposo, Alfredo Ferreira, Joaquim Jorge
Venue: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST 2016), November 2, 2016
DOI: https://doi.org/10.1145/2993369.2996348
Abstract: Travel in virtual environments is the simple action in which a user moves from a starting point A to a target point B. Choosing an inappropriate technique can compromise the virtual reality experience and cause side effects such as spatial disorientation, fatigue, and cybersickness. Effective travel techniques should be as natural as possible, so real-walking techniques achieve better results, despite their physical limitations. Approaches to overcome these limitations employ techniques that provide an indirect travel metaphor, such as point-steering and target-based travel. In fact, target-based techniques show a reduction in fatigue and cybersickness compared with point-steering techniques, albeit with less control. In this paper we further investigate the effects of speed and transitions in target-based techniques on factors such as comfort and cybersickness, using a head-mounted display setup.
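Speed and transition are the two parameters the study varies: an instant teleport versus an animated move at some speed. A minimal sketch of a target-based transition generator (eased interpolation and a 90 Hz frame time are illustrative assumptions, not the paper's conditions):

```python
def travel_positions(start, target, duration_s, dt=1.0 / 90.0):
    """Target-based travel with a smooth transition: generate the
    sequence of camera positions for an animated move from start to
    target. duration_s <= 0 degenerates to an instant teleport."""
    if duration_s <= 0.0:
        return [tuple(target)]
    steps = max(1, round(duration_s / dt))
    out = []
    for i in range(1, steps + 1):
        t = i / steps
        s = t * t * (3 - 2 * t)  # smoothstep ease-in/ease-out
        out.append(tuple(a + s * (b - a) for a, b in zip(start, target)))
    return out
```

The trade-off the paper studies lives in `duration_s`: instant teleports minimize exposure to visually induced self-motion (less cybersickness) but can disorient, while slower transitions preserve spatial context at the cost of more optic flow.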
Title: Investigating the process of emotion recognition in immersive and non-immersive virtual technological setups
Authors: Claudia Faita, F. Vanni, Camilla Tanca, E. Ruffaldi, M. Carrozzino, M. Bergamasco
Venue: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST 2016), November 2, 2016
DOI: https://doi.org/10.1145/2993369.2993395
Abstract: This paper investigates the use of Immersive Virtual Environments (IVE) to evaluate the process of emotion recognition from faces (ERF). ERF has mostly been probed using still photographs resembling universal expressions. However, this approach does not reflect the vividness of faces. Virtual Reality (VR) makes use of animated agents, trying to overcome this issue by reproducing the inherent dynamics of facial expressions, but outside a natural environment. We suggest that a setup using IVE technology simulating a real scene, in combination with virtual agents (VAs) displaying dynamic facial expressions, should improve the study of ERF. To support our claim we carried out an experiment in which two groups of subjects had to recognize the VAs' facial expressions of universal, basic emotions in IVE and non-IVE conditions. The goal was to evaluate the impact of immersion in VE on ERF investigation. Results showed that the level of immersion in the IVE does not interfere with the recognition task, and the high level of accuracy in facial recognition suggests that IVEs can be used to investigate the process of ERF.
Title: Webizing human interface devices for virtual reality
Authors: Daeil Seo, Doyeon Kim, Byounghyun Yoo, H. Ko
Venue: Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST 2016), November 1, 2016
DOI: https://doi.org/10.1145/2993369.2996307
Abstract: Virtual reality (VR) technology has recently become widely available, but the VR interaction devices supported in web environments are limited compared with those in the traditional VR environment. In the traditional VR environment, the Virtual-Reality Peripheral Network (VRPN) provides a device-independent and network-transparent interface. To promote the development of WebVR applications with various interaction devices, a method like VRPN is required in the web environment as well. In this paper, we propose a webizing method for human interface devices and related events that serves them as either VRPN messages or HTML DOM events to handle interaction events.
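The core idea above is wrapping raw device samples in a web-native event shape so a browser client can dispatch them like DOM events. A minimal sketch of such a message producer (the field names and `hid.` type prefix are illustrative assumptions, not the paper's actual schema):

```python
import json

def webize_hid_event(device, event_type, payload):
    """Wrap a raw device sample as a JSON message shaped like a DOM
    CustomEvent, so a browser client can turn it into
    dispatchEvent(new CustomEvent(type, {detail: ...})). The schema
    here is illustrative, not the one proposed in the paper."""
    return json.dumps({
        "type": f"hid.{device}.{event_type}",
        "detail": payload,
    })
```

A tracker pose sample would then arrive as, e.g., `webize_hid_event("tracker0", "pose", {"x": 0.1, "y": 1.2, "z": -0.4})`, and web code subscribes to `"hid.tracker0.pose"` exactly as it would to any other event type, which is what makes the device interface web-native.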