{"title":"Evaluating depth perception of photorealistic mixed reality visualizations for occluded objects in outdoor environments","authors":"Arindam Dey, Andrew Cunningham, C. Sandor","doi":"10.1145/1889863.1889911","DOIUrl":"https://doi.org/10.1145/1889863.1889911","url":null,"abstract":"Enabling users to accurately perceive the correct depth of occluded objects is one of the major challenges in user interfaces for Mixed Reality (MR). In this paper, we present an evaluation of depth perception in handheld outdoor mixed reality environment in far-field distances through two photorealistic visualizations of occluded objects (X-ray and Melt) in the presence and absence of a depth cue.","PeriodicalId":144717,"journal":{"name":"2010 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121866636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Effects of travel technique and gender on a divided attention task in a virtual environment
Authors: Evan A. Suma, Samantha L. Finkelstein, Seth Clark, P. Goolkasian, L. Hodges
DOI: https://doi.org/10.1109/3DUI.2010.5444726
In: 2010 IEEE Symposium on 3D User Interfaces (3DUI), 20 March 2010
Abstract: We report a user study that compared four virtual environment travel techniques using a divided attention task. Participants used either real walking, gaze-directed, pointing-directed, or torso-directed travel to follow a target through an environment while simultaneously responding to auditory stimuli. In addition to travel technique, we investigated gender as a between-subjects variable, and task difficulty (simple or complex) and task type (single or divided) as within-subjects variables. Real walking yielded superior performance over the pointing-directed technique on measures of navigation task performance and recognition of stimuli presented during navigation. This indicates that participants using real walking may have had more spare cognitive capacity to process and encode stimuli than those using pointing-directed travel. We also found a gender-difficulty interaction: males performed worse and responded more slowly on the attention task when the spatial task was more difficult, whereas no differences were observed for females between difficulty levels. While these results may be pertinent for the design of virtual environments, the nature and goal of the virtual environment tasks must be carefully considered to determine whether similar effects on performance can be expected under different conditions.

Title: What do you do when two hands are not enough? Interactive selection of bonds between pairs of tangible molecules
Authors: P. Maier, M. Tönnis, G. Klinker, A. Raith, M. Drees, Fritz Kuhn
DOI: https://doi.org/10.1109/3DUI.2010.5444716
In: 2010 IEEE Symposium on 3D User Interfaces (3DUI), 20 March 2010
Abstract: For molecular modeling, chemical structures have to be understood and imagined both in their three-dimensional spatial extent and in their dynamic behavior. We have developed an AR-based system for tangible interaction with molecules using optical markers. When users bring several molecules close to one another, potential bonds are shown and the molecules dynamically change their 3D structure according to potential chemical reactions. A problem arises when users also need to select one such bond from a multitude of potential bonds while already using both hands to manipulate the molecules. We present two gesture-based techniques, shake-based and proximity-based, to solve this problem, and report on user tests evaluating these techniques with respect to speed, precision, and user acceptance.

Title: Contact sensing and interaction techniques for a distributed, multimodal floor display
Authors: Y. Visell, Severin Smith, Alvin W. Law, R. Rajalingham, J. Cooperstock
DOI: https://doi.org/10.1109/3DUI.2010.5444718
In: 2010 IEEE Symposium on 3D User Interfaces (3DUI), 20 March 2010
Abstract: This paper presents a novel interface and set of techniques enabling users to interact via the feet with augmented floor surfaces. The interface consists of an array of instrumented floor tiles distributed over an area of several square meters. Intrinsic force sensing is used to capture foot-floor contact at resolutions as fine as 1 cm, for use with floor-based multimodal touch-surface interfaces. We present the results of a preliminary evaluation of the usability of such a display.

{"title":"Virtual collision notification","authors":"K. J. Blom, Steffi Beckhaus","doi":"10.1109/3DUI.2010.5444723","DOIUrl":"https://doi.org/10.1109/3DUI.2010.5444723","url":null,"abstract":"Virtual collisions are reportedly an important part of creating effective experiences of virtual environments. Although they are considered vital, collision responses for travel in a virtual environment are not well understood. The effectivity of methods for notifying users of collisions has not been explored in the context of travel and the methods used are often not even reported. We present novel notification methods, based on haptic feedback via an output device embedded in the floor of our display and an initial study that compares nine notification methods. In a comparative study, our haptic floor feedback methods were preferred, followed by a thump sound and a wand device rumble. The results indicate that methods that are context appropriate, e.g. haptic responses and audio cues similar to real collisions with the object, are clearly preferable for realistic impressions of a world and of collisions.","PeriodicalId":144717,"journal":{"name":"2010 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"37 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131694011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The design and evaluation of 3D positioning techniques for multi-touch displays","authors":"Anthony Martinet, Géry Casiez, L. Grisoni","doi":"10.1109/3DUI.2010.5444709","DOIUrl":"https://doi.org/10.1109/3DUI.2010.5444709","url":null,"abstract":"Multi-touch displays represent a promising technology for the display and manipulation of 3D data. To fully exploit their capabilities, appropriate interaction techniques must be designed. In this paper, we explore the design of free 3D positioning techniques for multi-touch displays to exploit the additional degrees of freedom provided by this technology. Our contribution is two-fold: first we present an interaction technique to extend the standard four view-ports technique found in commercial CAD applications, and second we introduce a technique designed to allow free 3D positioning with a single view of the scene. The two techniques were evaluated in a preliminary experiment. The first results incline us to conclude that the two techniques are equivalent in term of performance showing that the Z-technique provides a real alternative to the statu quo viewport technique.","PeriodicalId":144717,"journal":{"name":"2010 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128623351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Audio haptic feedbacks for an acquisition task in a multi-target context","authors":"B. Ménélas, L. Picinali, B. Katz, P. Bourdot","doi":"10.1109/3DUI.2010.5444722","DOIUrl":"https://doi.org/10.1109/3DUI.2010.5444722","url":null,"abstract":"This paper presents the use of audio and haptic feedbacks to reduce the load of the visual channel in interaction tasks within virtual environments. An examination is made regarding the exploitation of audio and/or haptic cues for the acquisition of a desired target in an environment containing multiple and obscured distractors. This study compares different ways of identifying and locating a specified target among others by the mean of either audio, haptic, or both feedbacks rendered simultaneously. The analysis of results and subjective user comments indicate that active haptic and combined audio/haptic conditions offer better results when compared to the audio only condition. Moreover, that the association of haptic and audio feedback presents a real potential for the completion of the task.","PeriodicalId":144717,"journal":{"name":"2010 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116975129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RealNav: Exploring natural user interfaces for locomotion in video games","authors":"Brian M. Williamson, C. A. Wingrave, J. Laviola","doi":"10.1109/3DUI.2010.5444737","DOIUrl":"https://doi.org/10.1109/3DUI.2010.5444737","url":null,"abstract":"We present a reality-based locomotion study directly applicable to video game interfaces; specifically, locomotion control of the quarterback in American football. Focusing on American football drives requirements and ecologically grounds the interface tasks of: running down the field, maneuvering in a small area, and evasive gestures such as spinning, jumping, and the “juke.”","PeriodicalId":144717,"journal":{"name":"2010 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115049307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented foam sculpting for capturing 3D models","authors":"M. Marner, B. Thomas","doi":"10.1109/3DUI.2010.5444720","DOIUrl":"https://doi.org/10.1109/3DUI.2010.5444720","url":null,"abstract":"This paper presents a new technique to simultaneously model in both the physical and virtual worlds. The intended application domain for this technique is industrial design. A designer physically sculpts a 3D model from foam using a hand-held hot wire foam cutter. Both the foam and cutting tool are tracked, allowing the system to digitally replicate the sculpting process to produce a matching 3D virtual model. Spatial Augmented Reality is used to project visualizations onto the foam. Inspired by the needs of industrial designers, we have developed two visualizations for sculpting specific models: Target, which shows where foam needs to be removed to produce a model, and Cut Animation, which projects the paths for cuts to be made to reproduce a previous artifact. A third visualization of the wireframe of the generated model is projected onto the foam and used for verification. The final visualization employs 3D procedural textures such as a wood grain texture, providing a simulation of volumetric rendering. Volumetric rendering techniques such as this provide a more natural look that is projected onto the foam. Once the object has been modeled physically and virtually, the designer is able to annotate and paint the finished model. The system has been evaluated through a user study conducted with students from the School of Industrial Design at the University of South Australia.","PeriodicalId":144717,"journal":{"name":"2010 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124473605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The implementation of a novel walking interface within an immersive display","authors":"David Swapp, Julian Williams, A. Steed","doi":"10.1109/3DUI.2010.5444717","DOIUrl":"https://doi.org/10.1109/3DUI.2010.5444717","url":null,"abstract":"Locomotion interfaces pose a large challenge for designers of immersive virtual environments. While treadmills and other actuated interfaces can be integrated with spatial displays or head-mounted displays, they are expensive or they constrain walking to one direction. In this technical note we demonstrate the integration of a passive walking interface integrated with a CAVE™-like display. The Wizdish is a new concept for walking interfaces. Users wear low-friction shoes and “walk” on a dish-shaped apparatus. By tracking the feet and integrating a redirection algorithm we have constructed a locomotion interface that reproduces some aspects of walking whilst removing the need for the user to operate any hand-held controller.","PeriodicalId":144717,"journal":{"name":"2010 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125140976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}