{"title":"Framework for remote collaborative interaction in virtual environments based on proximity","authors":"Mar González-Franco, M. Hall, Devon David Hansen, K. Jones, Paul Hannah, P. Bermell-Garcia","doi":"10.1109/3DUI.2015.7131746","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131746","url":null,"abstract":"This paper presents a framework for remote collaboration within distributed teams that need to communicate, share and modify information. The use of virtual environments is proposed to enhance Computer Supported Cooperative Work (CSCW). The framework is built around the concepts of self-representation and proximity to implement different levels of interactivity along both the communication and collaboration axes. Using proxemics as the rule for communication allows high scalability in densely populated collaborative scenarios. Furthermore, using proximity as the main rule for interacting with shared content establishes different levels of collaboration.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"176 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132932428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D virtual hand pointing with EMS and vibration feedback","authors":"Max Pfeiffer, W. Stuerzlinger","doi":"10.1109/3DUI.2015.7131735","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131735","url":null,"abstract":"Pointing is one of the most basic interaction methods for 3D user interfaces. Previous work has shown that visual feedback improves such actions. Here we investigate whether electrical muscle stimulation (EMS) and vibration are beneficial for 3D virtual hand pointing. In our experiment we used a 3D version of a Fitts' task to compare visual feedback, EMS, and vibration with no feedback. The results demonstrate that both EMS and vibration provide a reasonable addition to visual feedback. We also found good user acceptance for both technologies.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116719898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Freehand vs. micro gestures in the car: Driving performance and user experience","authors":"Renate Häuslschmid, Benjamin Menrad, A. Butz","doi":"10.1109/3DUI.2015.7131749","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131749","url":null,"abstract":"Until now, freehand and micro gestures have only been investigated separately. We conducted a driving simulator study to investigate the effects of controlling a music player with each gesture type on driving performance, as well as the user experience provided. Subjects felt that stimulation, control, popularity, and physical form were addressed by both gesture types, but slightly better by freehand gestures. Micro gestures, however, were rated notably higher regarding their perceived degree of autonomy. Regarding driving performance, deteriorations were found for both gesture types. Results indicate that freehand gestures impair lateral control while micro gestures delay steering.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116743896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exact interactions executed with new technique estimating positions of virtual objects by using human body movements","authors":"Masahiro Suzuki, H. Unno, K. Uehira","doi":"10.1109/3DUI.2015.7131762","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131762","url":null,"abstract":"We evaluated the exactness of interactions using our proposed technique for applications in which users' bodies interact with virtual objects presented in front of them on 3-D displays. Interactive applications require interactions to be executed when users' bodies are at the positions of virtual objects; however, conventional techniques, which execute interactions when the bodies are at positions calculated from binocular disparity, make it difficult to meet this requirement because the calculated positions often differ from the positions of the virtual objects. The proposed technique estimates the positions of virtual objects from human body movements, and meets the requirement by executing interactions when users' bodies are at the estimated positions. We conducted an experiment in which users actually interacted with virtual objects, and the results indicated that more interactions succeeded with the proposed technique than with conventional techniques. We demonstrated that interactions using the proposed technique were exact.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128235923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LazyNav: 3D ground navigation with non-critical body parts","authors":"Emilie Guy, Parinya Punpongsanon, D. Iwai, Kosuke Sato, T. Boubekeur","doi":"10.1109/3DUI.2015.7131725","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131725","url":null,"abstract":"With the growing interest in natural input devices and virtual reality, mid-air ground navigation is becoming a fundamental interaction for a large collection of application scenarios. While classical input devices (e.g., mouse/keyboard, gamepad, touchscreen) have their own ground navigation standards, natural input techniques still lack acknowledged mechanisms for travelling in a 3D scene. In particular, for most applications, navigation is not the primary interaction. Thus, the user should navigate the scene while still being able to perform other interactions with her hands, and observe the displayed content by moving her eyes and locally rotating her head. Since most ground navigation scenarios require only two degrees of freedom (moving forward or backward and rotating the view to the left or right), we propose LazyNav, a mid-air ground navigation control model that leaves the user's hands, eyes and local head orientation completely free, making use of a single pair of the remaining tracked body elements to tailor the navigation. To this end, we design several navigation body motions and study their desired properties, such as being easy to discover, easy to control, socially acceptable, accurate and not tiring. We also develop several assumptions about motion design for ground navigation and evaluate them. Finally, we highlight general advice on mid-air ground navigation techniques.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127093556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"March-and-Reach: A realistic ladder climbing technique","authors":"Chengyuan Lai, Ryan P. McMahan, James Hall","doi":"10.1109/3DUI.2015.7131719","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131719","url":null,"abstract":"In most 3D applications, travel is limited to horizontal movement. A few 3D travel techniques allow for vertical travel, but most of them rely on “magic” abilities, such as flying. We sought to develop a realistic vertical travel technique for climbing ladders. We have developed March-and-Reach, with which the user marches in place to virtually step on lower ladder rungs while reaching to virtually grab higher rungs. We conducted a within-subject study to compare March-and-Reach to two prior ladder-climbing techniques. Results indicate that users consider and treat March-and-Reach as the most realistic ladder climbing technique.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122553885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dealing with frame cancellation for stereoscopic displays in 3D user interfaces","authors":"Jérémy Lacoche, Morgan Le Chénéchal, S. Chalmé, J. Royan, Thierry Duval, V. Gouranton, E. Maisel, B. Arnaldi","doi":"10.1109/3DUI.2015.7131729","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131729","url":null,"abstract":"This paper aims at reducing the ocular discomfort created by stereoscopy due to the effect called “frame cancellation”, for movies and interactive applications. This effect appears when a virtual object in negative parallax (in front of the screen) is clipped by the screen edges; the stereopsis cue lets observers perceive the object popping out of the screen, while the occlusion cue provides an opposite signal. Such a situation is not possible in the real world. This causes visual discomfort for observers and leads to poor depth perception of the scene. The issue is directly linked to the physical limits of the display, whose size may not cover the observer's entire field of view. To deal with these physical constraints we introduce two new methods in the context of interactive applications. The first method consists of two new rendering effects based on progressive transparency that aim to preserve the stereoscopic pop-out effect. The second method adapts the user's interaction, preventing virtual objects from being placed in areas subject to frame cancellation. Both methods have been evaluated and have shown good efficiency compared to state-of-the-art approaches.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131538265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conflict resolution models on usefulness within multi-user collaborative virtual environments","authors":"Aida Erfanian, Yaoping Hu","doi":"10.1109/3DUI.2015.7131743","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131743","url":null,"abstract":"Conflict resolution models play a key role in coordinating simultaneous interactions in multi-user collaborative virtual environments (VEs). Current conflict resolution models are first-come-first-served (FCFS) and dynamic priority (DP). Known to be unfair, the FCFS model grants all interaction opportunities to the most agile user. In contrast, the DP model gives all users the perception of equality in interaction. Nevertheless, it remains unclear whether this perception of equality could impact the usefulness of multi-user collaborative VEs. Thus, the present work compared the FCFS and DP models with respect to the usefulness of multi-user collaborative VEs. The comparison was based on metrics of usefulness (i.e., task focus, decision time, and consensus), which we defined according to the ISO/IEC 25010:2011 standard. This definition remedies current usefulness metrics, which actually measure the effectiveness and efficiency of target technologies rather than their usefulness. On our multi-user collaborative VE, we observed that the DP model yielded significantly lower decision time and higher consensus than the FCFS model. There was, however, no significant difference in task focus between the two models. These observations imply a potential to improve multi-user collaborative VEs.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121719356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multi-touch finger gesture based low-fatigue VR travel framework","authors":"Zhixin Yan, R. Lindeman","doi":"10.1109/3DUI.2015.7131766","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131766","url":null,"abstract":"In this poster, we present a low-fatigue VR travel framework which provides smooth transitions between three travel metaphors (walking, Segway and surfboard) with one multi-touch device. Our idea is that by mapping non-dominant-hand gestures to lower-body motion, users can travel in a low-fatigue and intuitive way while working on other VR tasks, such as picking up objects or moving virtual widgets with their dominant hand.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"289 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114952356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A spatial partitioning heuristic for automatic adjustment of the 3D navigation speed in multiscale virtual environments","authors":"Henrique Taunay, Vinicius Rodrigues, Rodrigo Braga, Pablo Elias, Luciano P. Reis, A. Raposo","doi":"10.1109/3DUI.2015.7131726","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131726","url":null,"abstract":"With technological evolution, 3D virtual environments continuously increase in complexity; such is the case with multiscale environments, i.e., environments that contain groups of objects with extremely diverging levels of scale. Such scale variation makes interactive navigation difficult, since it demands repetitive and unintuitive adjustments in either velocity or scale, according to the objects close to the observer, in order to ensure comfortable and stable navigation. Recent efforts rely on heavy GPU-based solutions that are not feasible for complex scenes. We present a spatial partitioning heuristic for automatic adjustment of the 3D navigation speed in a multiscale virtual environment that minimizes this workload by transferring it to the CPU, allowing the GPU to focus on rendering. With scene topological information obtained in a preprocessing phase, we can obtain, in real time, the closest object and the visible objects, which allows us to propose two different heuristics for automatic navigation velocity. Finally, in order to verify the usability gain of the proposed approaches, user tests were conducted to evaluate the accuracy and efficiency of navigation, as well as users' subjective satisfaction. Results were particularly significant in demonstrating accuracy gains in navigation with the proposed approaches for both laymen and advanced users.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115347619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}