Designing Capsule, an input device to support the manipulation of biological datasets
W. Lages, Gustavo A. Arango, D. Laidlaw, J. Socha, D. Bowman. 2016 IEEE Symposium on 3D User Interfaces (3DUI). doi:10.1109/3DUI.2016.7460067
Abstract: In this paper we present the design process of Capsule, an inertial input device to support 3D manipulation of biological datasets. Our motivation is to improve the scientist's workflow during the analysis of 3D biological data such as proteins, CT scans, or neuron fibers. We discuss the design process and the possibilities this device opens.

A part-task haptic simulator for ophthalmic surgical training
Jia Luo, Patrick Kania, P. Banerjee, Shammema Sikder, C. Luciano, W. Myers. 2016 IEEE Symposium on 3D User Interfaces (3DUI). doi:10.1109/3DUI.2016.7460069
Abstract: This poster presents a part-task haptic simulator for ophthalmic surgical training, developed for the MicrovisTouch simulation platform. The simulator provides both realistic 3D visualization and haptic feedback. Trainees learn by interacting with a physics-based dynamic virtual eye model, handling virtual instruments and receiving tactile feedback while performing a series of surgical tasks and exercises, including micro-dexterity and corneal incision. A pilot study of the micro-dexterity task measured the simulator's effectiveness in evaluating trainees' skill at the precise instrument motions required for ophthalmic surgical procedures.

{"title":"DesktopGlove: A multi-finger force feedback interface separating degrees of freedom between hands","authors":"Merwan Achibet, Géry Casiez, M. Marchal","doi":"10.1109/3DUI.2016.7460024","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460024","url":null,"abstract":"In virtual environments, interacting directly with our hands and fingers greatly contributes to immersion, especially when force feedback is provided for simulating the touch of virtual objects. Yet, common haptic interfaces are unfit for multi-finger manipulation and only costly and cumbersome grounded exoskeletons do provide all the efforts expected from object manipulation. To make multi-finger haptic interaction more accessible, we propose to combine two affordable haptic interfaces into a bimanual setup named DesktopGlove. With this approach, each hand is in charge of different components of object manipulation: one commands the global motion of a virtual hand while the other controls its fingers for grasping. In addition, each hand is subjected to forces that relate to its own degrees of freedom so that users perceive a variety of haptic effects through both of them. Our results show that (1) users are able to integrate the separated degrees of freedom of DesktopGlove to efficiently control a virtual hand in a posing task, (2) DesktopGlove shows overall better performance than a traditional data glove and is preferred by users, and (3) users considered the separated haptic feedback realistic and accurate for manipulating objects in virtual environments.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116449010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Looking into HMD: A method of latency measurement for head mounted display","authors":"R. Kijima, Kento Miyajima","doi":"10.1109/3DUI.2016.7460064","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460064","url":null,"abstract":"Latency is an important specification of the Head Mounted Display (HMD). The dynamical characteristics of the display and that of the lag compensation is the non-negligible part in the remaining latency after the lag compensation for the state-of-the-art HMD. By examining the past method, it was revealed that the evaluation of the subjective view was necessary to grasp such values. The result of measurement proved the capability to evaluate such dynamical characteristics, as well as the average latency.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"242 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122719344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Batmen - Hybrid collaborative object manipulation using mobile devices
M. Cabral, Gabriel Roque, Mario Nagamura, Andre Montes, Eduardo Zilles Borba, C. Kurashima, M. Zuffo. 2016 IEEE Symposium on 3D User Interfaces (3DUI). doi:10.1109/3DUI.2016.7460077
Abstract: We present an interactive, collaborative 3D object manipulation system that couples off-the-shelf mobile devices with Augmented Reality (AR) technology, allowing multiple users to collaborate concurrently on a scene. Each participating user works with an Android mobile device and a desktop (or laptop) in tandem: the 3D scene is visualized on the desktop system, while changes to the scene viewpoint and object manipulation are performed through object tracking on the mobile device. Multiple users, each with a laptop and a mobile device, can collaborate on manipulating objects. The system leverages users' knowledge of common gesture-based tasks on current mobile devices. We built a prototype that allows users to complete the requested tasks and ran an informal user study with experienced VR researchers to validate the system.

{"title":"Let your fingers do the walking: A unified approach for efficient short-, medium-, and long-distance travel in VR","authors":"Zhixin Yan, R. Lindeman, Arindam Dey","doi":"10.1109/3DUI.2016.7460027","DOIUrl":"https://doi.org/10.1109/3DUI.2016.7460027","url":null,"abstract":"The tradeoff between speed and precision is one of the challenging problems of travel interfaces. Sometimes users want to travel long distances (e.g., fly) and care less about precise movement, while other times they want to approach nearby objects in a more-precise way (e.g., walk), and care less about how quickly they move. Between these two extremes there are scenarios when both speed and precision become equally important. In real life, we often seamlessly combine these modes. However, most VR systems support a single travel metaphor, which may only be good for one range of travel, but not others. We present a new VR travel framework which supports three separate multi-touch travel techniques, one for each distance range, but that all use the same device. We use a unifying metaphor of the user's fingers becoming their legs for each of the techniques. We are investigating the usability and user acceptance of the fingers-as-legs metaphor, as well as the efficiency and naturalness of switching between the different travel modes. We conducted an experiment focusing on user performance using the three travel modes, and compared our multi-touch, gesture-based approach with a traditional Gamepad travel interface. The results suggest that participants using a Gamepad interface are more time efficient. However, the quality of completing the tasks with the two input devices was similar, while ForcePad user response was faster for switching between travel modes.","PeriodicalId":175060,"journal":{"name":"2016 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122126369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of user physical fitness on performance in virtual reality
Aryabrata Basu, Catherine Ball, Benjamin Manning, K. Johnsen. 2016 IEEE Symposium on 3D User Interfaces (3DUI). doi:10.1109/3DUI.2016.7460057
Abstract: A person's level of physical fitness affects their health and many other factors in their lives, but little is known about its effect on factors relevant to virtual environments. To address this knowledge gap, we conducted a study examining the relationship of several physical fitness measures with performance, presence, and simulator sickness during use of an HMD-based maze-type virtual environment. We recorded each participant's trajectory through the maze; afterwards, participants reported simulator sickness and presence and provided written and verbal feedback. Our analysis shows a positive correlation between self-reported physical fitness and user performance. Further research is needed to establish a causal relationship and to develop methods that make use of this information in the design of virtual environments.

Evaluation of hands-free HMD-based navigation techniques for immersive data analysis
Daniel Zielasko, Sven Horn, Sebastian Freitag, B. Weyers, T. Kuhlen. 2016 IEEE Symposium on 3D User Interfaces (3DUI). doi:10.1109/3DUI.2016.7460040
Abstract: To use the full potential of immersive data analysis while wearing a head-mounted display, users have to be able to navigate through the spatial data. We collected, developed, and evaluated five hands-free navigation methods that are usable while seated at the analyst's usual workplace. All methods meet the requirements of being easy to learn and inexpensive to integrate into existing workplaces. We conducted a user study with 23 participants, whose task was to determine the shortest path between various pairs of vertices in a large 3D graph; a body-leaning metaphor and an accelerometer-pedal metaphor performed best.

Towards a comparative evaluation of visually guided physical reach motions during 3D interactions in real and virtual environments
Elham Ebrahimi, Sabarish V. Babu, C. Pagano, S. Jörg. 2016 IEEE Symposium on 3D User Interfaces (3DUI). doi:10.1109/3DUI.2016.7460058
Abstract: In an initial study, we characterize the properties of human reach motion, with and without visual guidance, in real and virtual worlds within interaction space. We aim to understand how perceptual differences between real and virtual worlds affect physical reaches during 3D interaction. Participants typically reach to the perceived 3D location of objects to perform selection and manipulation, as in virtual assembly tasks or rehabilitation exercises; because of technological limitations such as a restricted visual field of view, limited resolution, and the latency and jitter associated with physical movements, they have only approximate perceptual information in the virtual world compared to the real one. In this poster, we examine how participants' motor responses differ between visually guided and non-visually guided situations, comparing the motor component of 3D interaction in the virtual and physical worlds through factors such as the accuracy and velocity of each reaching task.

VUME: The voluntary-use methodology for evaluations
Jian Ma, Prathamesh Potnis, Alec G. Moore, Ryan P. McMahan. 2016 IEEE Symposium on 3D User Interfaces (3DUI). doi:10.1109/3DUI.2016.7460042
Abstract: To better understand how controlled research results translate into actual voluntary use of 3D user interfaces (3D UIs), we developed a new evaluation approach. Using this approach, we conducted two studies evaluating two head-mounted displays (HMDs): a Sensics zSight and an Oculus Rift Development Kit 1 (DK1). The results of the first study indicate that the DK1 affords significantly better user performance. In the second study, we used a between-subjects design to determine whether participants would voluntarily explore and interact with a virtual environment more with the DK1 than with the zSight. We did not find a significant difference between the two HMDs, but showed that they were statistically equivalent. This indicates that results found in controlled evaluations do not always play a significant role in the voluntary use of a 3D UI.
