{"title":"Session details: Panel","authors":"Aitor Rovira","doi":"10.1145/3248577","DOIUrl":"https://doi.org/10.1145/3248577","url":null,"abstract":"","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130000217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"KnowHow: Contextual Audio-Assistance for the Visually Impaired in Performing Everyday Tasks","authors":"A. Agarwal, Sujeath Pareddy, Swaminathan Manohar","doi":"10.1145/2983310.2989196","DOIUrl":"https://doi.org/10.1145/2983310.2989196","url":null,"abstract":"We present a device for visually impaired persons (VIPs) that delivers contextual audio assistance for physical objects and tasks. In initial observations, we found ubiquitous use of audio-assistance technologies by VIPs for interacting with computing devices, such as Android TalkBack. However, we also saw that devices without screens frequently lack accessibility features. Our solution allows a VIP to obtain audio assistance in the presence of an arbitrary physical interface or object through a chest-mounted device. On-board are camera sensors that point towards the user's personal front-facing grasping region. Upon detecting certain gestures such as picking up an object, the device provides helpful contextual audio information to the user. Textual interfaces can be read aloud by sliding a finger over the surface of the object, allowing the user to hear a document or receive audio guidance for non-assistively-enabled electronic devices. The user may provide questions verbally in order to refine their audio assistance, or to ask broad questions about their environment. Our motivation is to provide sensemaking faculties that creatively approximate those of non-VIPs in tasks that make VIPs ineligible for common employment opportunities.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125328175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time Sign Language Recognition with Guided Deep Convolutional Neural Networks","authors":"Zhengzhe Liu, Fuyang Huang, G. Tang, F. Sze, J. Qin, Xiaogang Wang, Qiang Xu","doi":"10.1145/2983310.2989187","DOIUrl":"https://doi.org/10.1145/2983310.2989187","url":null,"abstract":"We develop a real-time, robust and accurate sign language recognition system leveraging deep convolutional neural networks(DCNN). Our framework is able to prevent common problems such as error accumulation of existing frameworks and it outperforms state-of-the-art frameworks in terms of accuracy, recognition time and usability.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129939384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Desktop Orbital Camera Motions Using Rotational Head Movements","authors":"Thibaut Jacob, G. Bailly, É. Lecolinet, Géry Casiez, M. Teyssier","doi":"10.1145/2983310.2985758","DOIUrl":"https://doi.org/10.1145/2983310.2985758","url":null,"abstract":"In this paper, we investigate how head movements can serve to change the viewpoint in 3D applications, especially when the viewpoint needs to be changed quickly and temporarily to disambiguate the view. We study how to use yaw and roll head movements to perform orbital camera control, i.e., to rotate the camera around a specific point in the scene. We report on four user studies. Study 1 evaluates the useful resolution of head movements. Study 2 informs about visual and physical comfort. Study 3 compares two interaction techniques, designed by taking into account the results of the two previous studies. Results show that head roll is more efficient than head yaw for orbital camera control when interacting with a screen. Finally, Study 4 compares head roll with a standard technique relying on the mouse and the keyboard. Moreover, users were allowed to use both techniques at their convenience in a second stage. Results show that users prefer and are faster (14.5%) with the head control technique.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124489647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inducing Body-Transfer Illusions in VR by Providing Brief Phases of Visual-Tactile Stimulation","authors":"Oscar Ariza, J. Freiwald, Nadine Laage, M. Feist, Mariam Salloum, G. Bruder, Frank Steinicke","doi":"10.1145/2983310.2985760","DOIUrl":"https://doi.org/10.1145/2983310.2985760","url":null,"abstract":"Current developments in the area of virtual reality (VR) allow numerous users to experience immersive virtual environments (VEs) in a broad range of application fields. In the same way, some research has shown novel advances in wearable devices to provide vibrotactile feedback which can be combined with low-cost technology for hand tracking and gestures recognition. The combination of these technologies can be used to investigate interesting psychological illusions. For instance, body-transfer illusions, such as the rubber-hand illusion or elongated-arm illusion, have shown that it is possible to give a person the persistent illusion of body transfer after only brief phases of synchronized visual-haptic stimulation. The motivation of this paper is to induce such perceptual illusions by combining VR, vibrotactile and tracking technologies, offering an interesting way to create new spatial interaction experiences centered on the senses of sight and touch. We present a technology framework that includes a pair of self-made gloves featuring vibrotactile feedback that can be synchronized with audio-visual stimulation in order to reproduce body-transfer illusions in VR. We present in detail the implementation of the framework and show that the proposed technology setup is able to induce the elongated-arm illusion providing automatic tactile stimuli, instead of the traditional approach based on manually synchronized stimulation.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121088527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Touching the Sphere: Leveraging Joint-Centered Kinespheres for Spatial User Interaction","authors":"Paul Lubos, G. Bruder, Oscar Ariza, Frank Steinicke","doi":"10.1145/2983310.2985753","DOIUrl":"https://doi.org/10.1145/2983310.2985753","url":null,"abstract":"Designing spatial user interfaces for virtual reality (VR) applications that are intuitive, comfortable and easy to use while at the same time providing high task performance is a challenging task. This challenge is even harder to solve since perception and action in immersive virtual environments differ significantly from the real world, causing natural user interfaces to elicit a dissociation of perceptual and motor space as well as levels of discomfort and fatigue unknown in the real world. In this paper, we present and evaluate the novel method to leverage joint-centered kinespheres for interactive spatial applications. We introduce kinespheres within arm's reach that envelope the reachable space for each joint such as shoulder, elbow or wrist, thus defining 3D interactive volumes with the boundaries given by 2D manifolds. We present a Fitts' Law experiment in which we evaluated the spatial touch performance on the inside and on the boundary of the main joint-centered kinespheres. Moreover, we present a confirmatory experiment in which we compared joint-centered interaction with traditional spatial head-centered menus. Finally, we discuss the advantages and limitations of placing interactive graphical elements relative to joint positions and, in particular, on the boundaries of kinespheres.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123773860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Haptic Exploration of Remote Environments with Gesture-based Collaborative Guidance","authors":"Seokyeol Kim, Jinah Park","doi":"10.1145/2983310.2989201","DOIUrl":"https://doi.org/10.1145/2983310.2989201","url":null,"abstract":"We present a collaborative haptic interaction method for exploring a remote physical environment with guidance from a distant helper. Spatial information, which is represented by a point cloud, of the remote environment is directly rendered as a contact force without reconstruction of surfaces. On top of this, the helper can selectively exert an attractive force for reaching a target or a repulsive force for avoiding a forbidden region to the user by using free-hand gestures.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126468189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TickTockRay Demo: Smartwatch Raycasting for Mobile HMDs","authors":"D. Kharlamov, Krzysztof Pietroszek, Liudmila Tahai","doi":"10.1145/2983310.2989206","DOIUrl":"https://doi.org/10.1145/2983310.2989206","url":null,"abstract":"We demonstrate TickTockRay, an implementation of fixed-origin raycasting technique that utilizes a smartwatch as an input device. We show that a smartwatch-based raycasting is a good alternative to a head-rotation-controlled cursor or a specialized input device. TickTockRay implements fixed-origin raycasting with the ray originating from a fixed point, located, roughly, in the user's chest. The control-display (C/D) ratio of TickTockRay technique is set to 1, with exact correspondence between the ray and the smartwatch's rotation. Such C/D ratio enables a user to select targets in the entire virtual reality control space.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126096889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Moving Ahead with Peephole Pointing: Modelling Object Selection with Head-Worn Display Field of View Limitations","authors":"Barrett Ens, David Ahlström, Pourang Irani","doi":"10.1145/2983310.2985756","DOIUrl":"https://doi.org/10.1145/2983310.2985756","url":null,"abstract":"Head-worn displays (HWDs) are now becoming widely available, which will allow researchers to explore sophisticated interface designs that support rich user productivity features. In a large virtual workspace, the limited available field of view (FoV) may cause objects to be located outside of the available viewing area, requiring users to first locate an item using head motion before making a selection. However, FoV varies widely across different devices, with an unknown impact on interface usability. We present a user study to test two-step selection models previously proposed for \"peephole pointing\" in large virtual workspaces on mobile devices. Using a CAVE environment to simulate the FoV restriction of stereoscopic HWDs, we compare two different input methods, direct pointing, and raycasting in a selection task with varying FoV width. We find a very strong fit in this context, comparable to the prediction accuracy in the original studies, and much more accurate than the traditional Fitts' law model. We detect an advantage of direct pointing over raycasting, particularly with small targets. Moreover, we find that this advantage of direct pointing diminishes with decreasing FoV.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"444 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125765296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Arm-Hidden Private Area on an Interactive Tabletop System","authors":"Kai Li, Asako Kimura, F. Shibata","doi":"10.1145/2983310.2989194","DOIUrl":"https://doi.org/10.1145/2983310.2989194","url":null,"abstract":"Tabletop systems are used primarily in meetings or other activities wherein information is shared. However, when confidential input is needed, for example when entering a password, privacy becomes an issue. In this study, we use the shadowed area nearby the forearm when the user places their forearm on the tabletop. And our tabletop security system is using that hidden-area to show a confidential information window. We also introduce several potential applications for this hidden-area system.","PeriodicalId":185819,"journal":{"name":"Proceedings of the 2016 Symposium on Spatial User Interaction","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133817762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}