{"title":"Flex: hand gesture recognition using muscle flexing sensors","authors":"C. Eckhardt, John Sullivan, Krzysztof Pietroszek","doi":"10.1145/3131277.3134360","DOIUrl":"https://doi.org/10.1145/3131277.3134360","url":null,"abstract":"We present Flex, a low cost, lightweight, energy-efficient spatial input armband consisting of four flex resistance sensors. The device provides a continuous, 4-dimensional signal of forearm muscles flex. We train a long short-term memory network (LSTM) to enable real-time recognition of motion gestures as read by the sensor.","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115271199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gaze + pinch interaction in virtual reality","authors":"Ken Pfeuffer, Benedikt Mayer, D. Mardanbegi, Hans-Werner Gellersen","doi":"10.1145/3131277.3132180","DOIUrl":"https://doi.org/10.1145/3131277.3132180","url":null,"abstract":"Virtual reality affords experimentation with human abilities beyond what's possible in the real world, toward novel senses of interaction. In many interactions, the eyes naturally point at objects of interest while the hands skilfully manipulate in 3D space. We explore a particular combination for virtual reality, the Gaze + Pinch interaction technique. It integrates eye gaze to select targets, and indirect freehand gestures to manipulate them. This keeps the gesture use intuitive like direct physical manipulation, but the gesture's effect can be applied to any object the user looks at --- whether located near or far. In this paper, we describe novel interaction concepts and an experimental system prototype that bring together interaction technique variants, menu interfaces, and applications into one unified virtual experience. Proof-of-concept application examples were developed and informally tested, such as 3D manipulation, scene navigation, and image zooming, illustrating a range of advanced interaction capabilities on targets at any distance, without relying on extra controller devices.","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121318465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial user interaction panel","authors":"Florian Daiber, K. Johnsen, R. Lindeman, S. Subramanian","doi":"10.1145/3131277.3141410","DOIUrl":"https://doi.org/10.1145/3131277.3141410","url":null,"abstract":"In this panel, we will discuss the current state of Spatial User Interfaces (SUI), and the new research challenges that await us. The discussion will start on the topic of field studies, and practical applications of SUI technologies in the wild. Most current research focuses on controlled settings, therefore exploring how these technologies can be applied outside laboratories will be of particular relevance.","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126349027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Triggerwalking: a biomechanically-inspired locomotion user interface for efficient realistic virtual walking","authors":"Bhuvaneswari Sarupuri, S. Hoermann, Frank Steinicke, R. Lindeman","doi":"10.1145/3131277.3132177","DOIUrl":"https://doi.org/10.1145/3131277.3132177","url":null,"abstract":"Most current virtual reality (VR) applications use some form of teleportation to cover large distances, or real walking in room-scale setups for moving in virtual environments. Though real walking is the most natural for medium distances, it gets physically demanding and inefficient after prolonged use, while the sudden viewpoint changes experienced with teleportation often lead to disorientation. To close the gap between travel over long and short distances, we introduce TriggerWalking, a biomechanically-inspired locomotion user interface for efficient realistic virtual walking. The idea is to map the human's embodied ability for walking to a finger-based locomotion technique. Using the triggers of common VR controllers, the user can generate near-realistic virtual bipedal steps. We analyzed head oscillations of VR users while they walked with a head-mounted display, and used the data to simulate realistic walking motions with respect to the trigger pulls. We evaluated how the simulation of walking biomechanics affects task performance and spatial cognition. We also compared the usability of TriggerWalking with joystick, teleportation, and walking in place. The results show that users can efficiently use TriggerWalking, while still benefiting from the inherent advantages of real walking.","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115045659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the effect of tangible virtual reality on spatial perspective taking ability","authors":"J. S. Chang, Georgina Yeboah, Alison Doucette, Paul G. Clifton, Michael Nitsche, T. Welsh, Ali Mazalek","doi":"10.1145/3131277.3132171","DOIUrl":"https://doi.org/10.1145/3131277.3132171","url":null,"abstract":"As shown in many large-scale and longitudinal studies, spatial ability is strongly associated with STEM (science, technology, engineering, and mathematics) learning and career success. At the same time, a growing volume of research connects cognitive science theories with tangible/embodied interactions (TEI) and virtual reality (VR) to offer novel means to support spatial cognition. But very few VR-TEI systems are specifically designed to support spatial ability, nor are they evaluated with respect to spatial ability. In this paper, we present the background, approach, and evaluation of TASC (Tangibles for Augmenting Spatial Cognition), a VR-TEI system built to support spatial perspective taking ability. We tested 3 conditions (tangible VR, keyboard/mouse, control; n=46). Analysis of the pre/post-test change in performance on a perspective taking test revealed that only the VR-TEI group showed statistically significant improvements. The results highlight the role of tangible VR design for enhancing spatial cognition.","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124836475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CS-DTW: real-time matching of multivariate spatial input against thousands of templates using compute shader DTW","authors":"Krzysztof Pietroszek, Phuc Pham, C. Eckhardt","doi":"10.1145/3131277.3134355","DOIUrl":"https://doi.org/10.1145/3131277.3134355","url":null,"abstract":"We present an open-source implementation of multivariate subsequence Dynamic Time Warping (DTW) on GPU compute shaders (CS-DTW). Our implementation allows for real-time matching of a multivariate spatial input against thousands of pre-recorded templates. We show that, for template matching, CS-DTW is orders of magnitude faster than the state-of-the-art UCR-DTW Suite [2].","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127336845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using facial gestures to drive narrative in VR","authors":"I. Mavridou, M. Hamedi, M. Fatoorechi, J. Archer, Andrew Cleal, E. Balaguer-Ballester, E. Seiss, C. Nduka","doi":"10.1145/3131277.3134366","DOIUrl":"https://doi.org/10.1145/3131277.3134366","url":null,"abstract":"We developed an exploratory VR environment, where spatial features and narratives can be manipulated in real time by the facial and head gestures of the user. We are using the Faceteq prototype, exhibited in 2017, as the interactive interface. Faceteq consists of a wearable technology that can be adjusted on commercial HMDs for measuring facial expressions and biometric responses. Faceteq project was founded with the aim to provide a human-centred additional tool for affective human-computer interaction. The proposed demo will exhibit the hardware and the functionality of the demo in real time.","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122987462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Gaze","authors":"","doi":"10.1145/3247922","DOIUrl":"https://doi.org/10.1145/3247922","url":null,"abstract":"","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125138505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cloudbits","authors":"Florian Müller, Sebastian Günther, Azita Hosseini Nejad, Niloofar Dezfuli, M. Khalilbeigi, Max Mühlhäuser","doi":"10.1145/3131277.3132173","DOIUrl":"https://doi.org/10.1145/3131277.3132173","url":null,"abstract":"The retrieval of additional information from public (e.g., map data) or private (e.g., e-mail) information sources using personal smart devices is a common habit in today's co-located conversations. This behavior of users imposes challenges in two main areas: 1) cognitive focus switching and 2) information sharing. In this paper, we explore a novel approach for conversation support through augmented information bits, allowing users to see and access information right in front of their eyes. To that end, we investigate the requirements for the design of a user interface to support conversations through proactive information retrieval in an exploratory study. Based on the results, we 2) present CloudBits: A set of visualization and interaction techniques to provide mutual awareness and enhance coupling in conversations through augmented zero-query search visualization along with its prototype implementation. Finally, we 3) report the findings of a qualitative evaluation and conclude with guidelines for the design of user interfaces for conversation support.","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115045695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GestureDrawer: one-handed interaction technique for spatial user-defined imaginary interfaces","authors":"Teo Babic, Harald Reiterer, M. Haller","doi":"10.1145/3131277.3132185","DOIUrl":"https://doi.org/10.1145/3131277.3132185","url":null,"abstract":"Existing empty-handed mid-air interaction techniques for system control are typically limited to a confined gesture set or point-and-select on graphical user interfaces. In this paper, we introduce GestureDrawer, a one-handed interaction with a 3D imaginary interface. Our approach allows users to self-define an imaginary interface, acquire visuospatial memory of the position of its controls in empty space and enables them to select or manipulate those controls by moving their hand in all three dimensions. We evaluate our approach with three user studies and demonstrate that users can indeed position imaginary controls in 3D empty space and select them with an accuracy of 93% without receiving any feedback and without fixed landmarks (e.g. second hand). Further, we show that imaginary interaction is generally faster than mid-air interaction with graphical user interfaces, and that users can retrieve the position of their imaginary controls even after a proprioception disturbance. We condense our findings into several design recommendations and present automotive applications.","PeriodicalId":402574,"journal":{"name":"Proceedings of the 5th Symposium on Spatial User Interaction","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123311438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}