{"title":"Volumetric calibration and registration of multiple RGBD-sensors into a joint coordinate system","authors":"S. Beck, B. Fröhlich","doi":"10.1109/3DUI.2015.7131731","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131731","url":null,"abstract":"We present an integrated approach for the calibration and registration of color and depth (RGBD) sensors into a joint coordinate system. Our application domain is 3D telepresence where users in front of a three-dimensional display need to be captured from all directions. The captured data is used to virtually reconstruct the group of people at a remote location. One key requirement of such applications is that contributions from different color and depth cameras match, as closely as possible, in spatially overlapping or adjacent regions. Our method employs a tracked checkerboard to establish a number of correspondences between positions in color and depth camera space and in world space. These correspondences are used to construct a single calibration and registration volume per RGBD sensor which maps raw depth sensor values in a single step into a joint coordinate system and to their associated color values. This approach considerably reduces reconstruction latency by omitting expensive image rectification processes during runtime. Furthermore, our evaluation demonstrates a high measurement accuracy with an average 3D error below 3 mm and an average texture deviation smaller than 0.5 pixels for a space of about 1.5 m × 1.8 m × 1.5 m.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122053377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterizing embodied interaction in First and Third Person Perspective viewpoints","authors":"H. Debarba, E. Molla, B. Herbelin, R. Boulic","doi":"10.1109/3DUI.2015.7131728","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131728","url":null,"abstract":"Third Person Perspective (3PP) viewpoints have the potential to expand how one perceives and acts in a virtual environment. They offer increased awareness of the posture and of the surrounding of the virtual body as compared to First Person Perspective (1PP). But from another standpoint, 3PP can be considered as less effective for inducing a strong sense of embodiment into a virtual body. Following an experimental paradigm based on full body motion capture and immersive interaction, this study investigates the effect of perspective and of visuomotor synchrony on the sense of embodiment. It provides evidence supporting a high sense of embodiment in both 1PP and 3PP during engaging motor tasks, as well as guidelines for choosing the optimal perspective depending on location of targets.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115839944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Haptic ChairIO: A system to study the effect of wind and floor vibration feedback on spatial orientation in VEs","authors":"Mi Feng, R. Lindeman, H. Abdel-Moati, Jacob C. Lindeman","doi":"10.1109/3DUI.2015.7131744","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131744","url":null,"abstract":"In this poster, we present the design, implementation, and evaluation plan of a system called Haptic ChairIO. We first introduce a design space that classifies sensory cues and describes the potential use of haptic cues for cognitive tasks in virtual environments (VEs). We then describe the design and implementation of Haptic ChairIO, which is extensible to various sensory cue types and consists of a VR simulation, chair-based motion-control input, and multi-sensory output, including visual, audio, wind, and floor vibration feedback. An evaluation is planned to study the effect of wind and floor vibration on spatial orientation in VEs.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134144705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tangible virtual kitchen for the rehabilitation of Alzheimer's patients","authors":"Thuong N. Hoang, D. Foloppe, P. Richard","doi":"10.1109/3DUI.2015.7131750","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131750","url":null,"abstract":"We present a tangible virtual kitchen system for the rehabilitation of Alzheimer's patients. The system utilizes Sifteo Cubes, a commercially available product consisting of physical cubes with touch screens, accelerometers, and neighbor-detection sensors. The system supports intuitive and natural user interactions to improve motor skill rehabilitation for everyday kitchen tasks.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132874678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Practical chirocentric 3DUI platform for immersive environments","authors":"C. Papadopoulos, H. Choi, J. Sinha, Kiwon Yun, A. Kaufman, D. Samaras, B. Laha","doi":"10.1109/3DUI.2015.7131722","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131722","url":null,"abstract":"Chirocentric 3D user interfaces are sometimes hailed as the “holy grail” of human-computer interaction. However, implementations of these UIs can require cumbersome devices (such as tethered wearable datagloves), be limited in terms of functionality or obscure the algorithms used for hand pose and gesture recognition. These limitations inhibit designing, deploying and formally evaluating such interfaces. To ameliorate this situation, we describe the implementation of a practical chirocentric UI platform, targeted at immersive virtual environments with infrared tracking systems. Our main contributions are two machine learning techniques for the recognition of hand gestures (trajectories of the user's hands over time) and hand poses (configurations of the user's fingers) based on marker clouds and rigid body data. We report on the preliminary use of our system for the implementation of a bimanual 3DUI for a large immersive tiled display. We conclude with plans on using our system as a platform for the design and evaluation of bimanual chirocentric UIs, based on the Framework for Interaction Fidelity Analysis (FIFA).","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134638162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creating an impression of virtual liquid by modeling Japanese sake bottle vibrations","authors":"S. Ikeno, Ryuta Okazaki, Taku Hachisu, H. Kajimoto","doi":"10.1109/3DUI.2015.7131723","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131723","url":null,"abstract":"It is known that visual, auditory, and tactile modalities affect the experiences of eating and drinking. One such example is the “glug” sound and vibration from a Japanese sake bottle when pouring liquid. Previous studies have modeled the wave of the vibration by summation of two decaying sinusoidal waves with different frequencies; we examined the validity of this model by subjective evaluation. Furthermore, to enrich expression of various types of liquid, we included two new properties of liquid: the viscosity and the residual amount of liquid, both based on recorded data.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127280797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrated view-input interaction method for mobile AR","authors":"T. Tanikawa, H. Uzuka, Takuji Narumi, M. Hirose","doi":"10.1109/3DUI.2015.7131763","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131763","url":null,"abstract":"Recently, mobile AR has become very popular and is used in many commercial and product promotion activities. However, in almost all mobile AR applications, users can only view annotated information or preset virtual object motions in the AR environment, and cannot interact with virtual objects as they would with real objects in the real environment. In this paper, we propose a novel interaction method, called the integrated view-input interaction method, which integrates viewpoint movement and virtual object manipulation solely through handling of the mobile AR device. Our proposed method is well suited to popular touch-based mobile devices, such as smartphones and tablets, and does not need any additional sensor for sensing the manipulation target. We implemented three integration types and evaluated their efficiency in an object handling task.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131681606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping 2D input to 3D immersive spatial augmented reality","authors":"M. Marner, Ross T. Smith, B. Thomas","doi":"10.1109/3DUI.2015.7131755","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131755","url":null,"abstract":"This poster presents Viewpoint Cursor, a technique for mapping 2D user input from devices such as mobile phones, trackballs, or computer mice, to 3D multi-projector spatial augmented reality systems. While the ubiquity of input devices such as these make them obvious choices for spatial augmented reality, their 2D nature makes them difficult to use. Existing VR techniques rely on a display in front of the user's eyes on which to place virtual information. Immersive spatial augmented reality systems allow users to experience and interact with projected virtual information from any angle, using arbitrary placement of projectors. Viewpoint Cursor addresses these issues by mapping 2D input to a plane in front of the user's view. Ray casting is then used to find the 3D location for the cursor in the scene, which is then projected using the projection system. The user's position is tracked, with the input remapped accordingly, resulting in 2D input that matches what the user expects, regardless of their location.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125716971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hacking HTML5 canvas to create a stereo 3D renderer","authors":"Diego González-Zúñiga, J. Carrabina","doi":"10.1109/3DUI.2015.7131747","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131747","url":null,"abstract":"In this article, we present a drawing toolkit developed to create stereoscopic side-by-side 3D stimuli. The toolkit is based on the HTML5 canvas element, and drawing is achieved using intermediate stereo scripting methods that correspond to the `2d' context of the canvas. We include performance measurements of the toolkit running on desktop and mobile browsers, and discuss how the tool is used to implement a stereo 3D UI.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115257963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning from rehabilitation: A bi-manual interface approach","authors":"S. Hoermann, J. Collins, H. Regenbrecht","doi":"10.1109/3DUI.2015.7131751","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131751","url":null,"abstract":"Providing portable and affordable virtual reality systems for upper limb stroke rehabilitation is still a challenge. Here we present a simple user interface that allows the integration of various upper limb stroke exercises that can be autonomously performed by patients without the presence of a therapist, yet is portable and assembled using affordable off-the-shelf hardware components. In particular, the system integrates a bi-manual memory game where the user has to engage in meaningful therapeutic reaching exercises. We evaluated the user interface with a wide range of normal subjects and found that participants perceived the system as easy to use, they had no problems with the interaction and had an overall enjoyable experience with the system. This opens up possibilities of combining therapeutic reaching movements with goal-directed tasks to improve motivation and to enhance and increase rehabilitation outcomes for post-stroke patients. Beyond therapeutic use, our approach can also be applied to other 3D user interfaces for bi-manual interaction.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125008043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}