{"title":"Optical Sight Metaphor for Virtual Environments","authors":"A. Sherstyuk, J. Pair, Anton Treskunov","doi":"10.1109/3DUI.2007.340772","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340772","url":null,"abstract":"Optical sight is a new metaphor for selecting distant objects or precisely pointing at close objects in virtual environments. Optical sight combines ray-casting, hand based camera control, and variable zoom into one virtual instrument that can be easily implemented for a variety of virtual, mixed, and augmented reality systems. The optical sight can be modified into a wide family of tools for viewing and selecting objects. Optical sight scales well from desktop environments to fully immersive systems","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124497064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Effects of Environment Density and Target Visibility on Object Selection in 3D Virtual Environments","authors":"Lode Vanacken, Tovi Grossman, K. Coninx","doi":"10.1109/3DUI.2007.340783","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340783","url":null,"abstract":"Object selection is a primary interaction technique which must be supported by any interactive three-dimensional virtual reality application. Although numerous techniques exist, few have been designed to support the selection of objects in dense target environments, or the selection of objects which are occluded from the user's viewpoint. There is, thus, a limited understanding on how these important factors will affect selection performance. In this paper, we present a set of design guidelines and strategies to aid the development of selection techniques which can compensate for environment density and target visibility. Based on these guidelines, we present two techniques, the depth ray and the 3D bubble cursor, both augmented to allow for the selection of fully occluded targets. In a formal experiment, we evaluate the relative performance of these techniques, varying both the environment density and target visibility. The results found that both of these techniques outperformed a baseline point cursor technique, with the depth ray performing best overall.","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125935885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tokens and Board User Interface Based on a Force-Torque Sensing Technique","authors":"B. Panchaphongsaphak, R. Riener","doi":"10.1109/3DUI.2007.340787","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340787","url":null,"abstract":"This paper proposes a technique of a tangible user interface (TUI) for \"tokens and board\" games. The interface allows users to play the games through a set of physical tokens on an interactive board. When a player moves a token, the system identifies the move and locations by interpreting the force-torque data acting on the playing board. The proposed technique uses only one central six-axis force-torque sensor embedded beneath the playing board. Methods of minimizing the location detection errors and signal noise are presented. We demonstrate how to implement our technique through the user interface of virtual chess games as an example. The experimental results confirm the validity of our technique and show potential uses of the device for \"tokens and board\" games","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126693779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tactile Feedback at the Finger Tips for Improved Direct Interaction in Immersive Environments","authors":"R. Scheibe, Mathias Moehring, B. Fröhlich","doi":"10.1109/VR.2007.352508","DOIUrl":"https://doi.org/10.1109/VR.2007.352508","url":null,"abstract":"We present a new tactile feedback system for finger-based interactions in immersive virtual reality applications. The system consists of tracked thimbles for the fingers with shape memory alloy wires wrapped around each thimble. These wires touch the inside of the finger tips and provide an impression when they are shortened. We complement the impression on the finger tips by a subsequent vibration of the wire to generate a perceivable tactile stimulus over a longer period of time. The shortening and relaxation process of the wires as well as the vibration is controlled through a micro-controller receiving commands from the virtual reality application. We use the tactile feedback for communicating finger contacts with virtual objects in an application prototype for usability and reachability studies of car interiors. Our experiments with the system and an initial pilot study revealed that this type of feedback helps users to perform direct manipulation tasks with more reliability. Our users also preferred the system with tactile feedback over a system without the feedback","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123802861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Visual Appearance of User's Avatar Can Influence the Manipulation of Both Real Devices and Virtual Objects","authors":"Abdelmajid Kadri, A. Lécuyer, Jean-Marie Burkhardt","doi":"10.1109/3DUI.2007.340767","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340767","url":null,"abstract":"This paper describes two experiments conducted to study the influence of the visual appearance of the user's avatar (or 3D cursor) on the manipulation of both interaction devices and virtual objects in 3D virtual environments (VE). In both experiments, participants were asked to pick up a virtual cube and place it at a random location in a VE. The first experiment showed that the visual appearance of a 3D cursor could influence the participants in the way they manipulated the real interaction device. The participants changed the orientation of their hand as a function of the orientation suggested visually by the shape of the 3D cursor. The second experiment showed that one visual property of the avatar (i.e., the presence or absence of a directional cue) could influence the way participants picked up the cube in the VE. When using avatars or 3D cursors with a strong directional cue (e.g., arrows pointing to the left or right), participants generally picked up the cube by a specific side (e.g., right or left side). When using 3D cursors with no main directional cue, participants picked up the virtual cube more frequently by its front or top side. Taken together, our results suggest that some visual aspects (such as directional cues) of avatars or 3D cursors chosen to display the user in the VE could partially determine his/her behaviour during manipulation tasks. Such an influence could be used to prevent wrong uses or to favour optimal uses of manipulation interfaces such as haptic devices in virtual environments","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133303266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-section Projector: Interactive and Intuitive Presentation of 3D Volume Data using a Handheld Screen","authors":"K. Hirota, Yuya Saeki","doi":"10.1109/3DUI.2007.340775","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340775","url":null,"abstract":"A novel display system that presents cross-sectional images of 3D volume data in an intuitive and interactive way is proposed. A screen panel is manipulated by the user; the position and orientation of the screen are measured by sensors; the cross-sectional image of the 3D volume data at the screen plane is generated and projected on the screen panel. By supporting this interaction up to a relatively high-frequency motion of the screen panel, a volumetric image of the 3D data is provided to the user when the screen panel is quickly moved. The cross-sectional and volumetric images are thought to mutually compensate for each other's drawbacks; the volumetric image provides a holistic view of the spatial structure, while the cross-sectional image provides more precise information within the volume data. A sensing system to measure the motion of the screen plane using laser displacement sensors is designed, and a method to cancel the delay time from measurement to projection by predicting the motion of the screen panel is devised. Through the implementation of a prototype system, the feasibility of our approach is demonstrated, and future work required to improve the system is discussed","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116508050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Empirical Comparison of Task Sequences for Immersive Virtual Environments","authors":"Ryan P. McMahan, D. Bowman","doi":"10.1109/3DUI.2007.340770","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340770","url":null,"abstract":"System control - the issuing of commands - is a critical, but largely unexplored task in 3D user interfaces (3DUIs) for immersive virtual environments (IVEs). The task sequence (the order of operations in a system control task) is an important aspect of the design of a system control interface (SCI), because it affects the way the user must think about accomplishing the task. Most command line interfaces are based on the action-object task sequence (e.g. \"rm foo.txt\"). Most graphical user interfaces (GUIs) are based on the object-action task sequence (e.g. click on an icon then select \"delete\" from a pop-up menu). An SCI for an IVE should be transparent and induce minimal cognitive load, but it is not clear which task sequences support this goal. We designed an experiment using an interior design application context to determine the cognitive loads induced by various task sequences in IVEs. By subtracting the expected time for a user to complete the task from the total time, we have estimated the cognitive time, dependent only on task sequence. Our experiment showed that task sequence has a significant effect on the cognitive loads induced in IVEs. The object-action sequence and similar task sequences induce smaller cognitive loads than those induced by the action-object sequence. These results can be used to develop guidelines for creating 3DUIs for IVEs","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124222901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seven League Boots: A New Metaphor for Augmented Locomotion through Moderately Large Scale Immersive Virtual Environments","authors":"V. Interrante, B. Ries, Lee Anderson","doi":"10.1109/3DUI.2007.340791","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340791","url":null,"abstract":"When an immersive virtual environment represents a space that is larger than the available space within which a user can travel by directly walking, it becomes necessary to consider alternative methods for traveling through that space. The traditional solution is to require the user to travel 'indirectly', using a device that changes his viewpoint in the environment without actually requiring him to move - for example, a joystick. However, other solutions involving variations on direct walking are also possible. In this paper, we present a new metaphor for natural, augmented direct locomotion through moderately large-scale immersive virtual environments (IVEs) presented via head mounted display systems, which we call seven league boots. The key characteristic of this method is that it involves determining a user's intended direction of travel and then augmenting only the component of his or her motion that is aligned with that direction. After reviewing previously proposed methods for enabling intuitive locomotion through large IVEs, we begin by describing the technical implementation details of our novel method, discussing the various alternative options that we explored and parameters that we varied in an attempt to attain optimal performance. We then present the results of a pilot observer experiment that we conducted in an attempt to obtain objective, qualitative insight into the relative strengths and weaknesses of our new method, in comparison to the three most commonly used alternative locomotion methods: flying, via use of a wand; normal walking, with a uniform gain applied to the output of the tracker; and normal walking without gain, but with the location and orientation of the larger virtual environment periodically adjusted relative to the position of the participant in the real environment. In this study we found, among other things, that for travel down a long, straight virtual hallway, participants overwhelmingly preferred the seven league boots method to the other methods, overall","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127620671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of Travel Techniques in a Complex, Multi-Level 3D Environment","authors":"Evan A. Suma, Sabarish V. Babu, L. Hodges","doi":"10.1109/3DUI.2007.340788","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340788","url":null,"abstract":"This paper reports on a study that compares three different methods of travel in a complex, multi-level virtual environment using a between-subjects design. A real walking travel technique was compared to two common virtual travel techniques. Participants explored a two-story 3D maze at their own pace and completed four post-tests requiring them to remember different aspects of the environment. Testing tasks included recall of objects from the environment, recognition of objects present and not present, sketching of maps, and placing objects on a map. We also analyzed task completion time and collision data captured during the experiment session. Participants that utilized the real walking technique were able to place more objects correctly on a map, completed the maze faster, and experienced fewer collisions with the environment. While no condition outperformed the others on any of the remaining tests, our results indicate that for tasks involving the naive exploration of a complex, multi-level 3D environment, the real walking technique supports a more efficient exploration than common virtual travel techniques. While there was a consistent trend of better performance on our measures for the real walking technique, it is not clear from our data that the benefits of real walking in these types of environments always justify the cost and space trade-offs of maintaining a wide-area tracking system","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125192642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pointman - A Device-Based Control for Realistic Tactical Movement","authors":"J. Templeman, L. Sibert, R. Page, Patricia S. Denbrook","doi":"10.1109/3DUI.2007.340790","DOIUrl":"https://doi.org/10.1109/3DUI.2007.340790","url":null,"abstract":"Pointman™ is a new virtual locomotion control that uses a conventional dual joystick gamepad in combination with a tracked head-mounted display and sliding foot pedals. Unlike the control mappings of a conventional gamepad, Pointman allows users to specify their direction of movement independently from the heading of the upper body. The motivation for this work is to develop a virtual infantry training simulator that is inexpensive, portable, and allows the user to execute realistic tactical infantry movements. Tactical movements rely heavily on the ability to scan while moving along a path, which requires the ability to independently coordinate course and heading. Conventional gamepad control mappings confound course and heading, and facilitate moving sideways and spiraling toward or away from targets. Pointman was derived from an analysis of how people move and coordinate actions in the real and virtual worlds","PeriodicalId":301785,"journal":{"name":"2007 IEEE Symposium on 3D User Interfaces","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124498772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}