{"title":"Session details: Session 7: Applications","authors":"Jo Vermeulen","doi":"10.1145/3257141","DOIUrl":"https://doi.org/10.1145/3257141","url":null,"abstract":"","PeriodicalId":189872,"journal":{"name":"Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces","volume":"271 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120880290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Multimodal Segmentation on a Large Interactive Tabletop: Extending Interaction on Horizontal Surfaces with Gaze
Joshua Newn, Eduardo Velloso, M. Carter, F. Vetere
DOI: https://doi.org/10.1145/2992154.2992179
Abstract: Eye tracking is a promising input modality for interactive tabletops. However, issues such as eyelid occlusion and the viewing angle at distant positions present significant challenges for remote gaze tracking in this setting. We present the results of two studies that explore how gaze interaction can be enabled. Our first study contributes the results of an empirical investigation of gaze accuracy on a large horizontal surface, finding gaze to be unusable close to the user (due to eyelid occlusion), accurate at arm's length, and precise only horizontally at large distances. In light of these results, we propose two solutions for the design of interactive systems that use remote gaze tracking on the tabletop: multimodal segmentation, and the use of X-Gaze, our novel technique, to interact with out-of-reach objects. Our second study evaluates and validates both solutions in a video-on-demand application, presenting immediate opportunities for remote gaze interaction on horizontal surfaces.
{"title":"RootCap: Touch Detection on Multi-electrodes using Single-line Connected Capacitive Sensing","authors":"M. Tsuruta, Shuta Nakamae, B. Shizuki","doi":"10.1145/2992154.2992180","DOIUrl":"https://doi.org/10.1145/2992154.2992180","url":null,"abstract":"In designing interactive products, it is important for designers to test the product's usability by manufacturing its shape and interface iteratively through rapid prototyping. The goal of our research is to provide the designers with an additional touch sensing method for rapid prototyping interactive products with flat, curved, or flexible surface. In this paper, we present RootCap, a capacitive touch sensing method that can detect a touch on a multi-electrode input surface while maintaining the characteristics of a single-line connection. The key concept behind realizing this goal is the imposition of unique capacitance on each electrode (including the capacitor connected to the touch electrode) branching from the single-line connection. Moreover, we developed a technique for creating a capacitor by printing silver nanoparticle ink on both sides of a sheet of paper, supporting designers in the creation of a multi-electrode input surface, on which each electrode has a unique capacitance.","PeriodicalId":189872,"journal":{"name":"Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125407848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

GlassHands: Interaction Around Unmodified Mobile Devices Using Sunglasses
Jens Grubert, E. Ofek, M. Pahud, M. Kranz, D. Schmalstieg
DOI: https://doi.org/10.1145/2992154.2992162
Abstract: We present a novel approach for extending the input space around unmodified mobile devices. Using the built-in front-facing cameras of unmodified handheld devices, GlassHands estimates hand poses and gestures through reflections in sunglasses, ski goggles, or visors. GlassHands thereby creates an enlarged input space, rivaling input reach on large touch displays. We introduce the idea along with its technical concept and implementation. We demonstrate the feasibility and potential of our approach in several application scenarios, such as map browsing and drawing, using a set of interaction techniques previously possible only with modified mobile devices or on large touch displays. Our research is backed by a user study.
{"title":"Movement, Material, Mind: Tangible and Embodied Interactions for Discovery and Learning","authors":"Ali Mazalek","doi":"10.1145/2992154.3003474","DOIUrl":"https://doi.org/10.1145/2992154.3003474","url":null,"abstract":"We are increasingly tethered to pixelated boxes of varying shapes and sizes. These devices are ever present in our lives, transporting us daily into vast information and computational realms. And while our interactions with digital devices are arguably becoming more fluid and \"natural\", they still make only limited use of our motor system and largely isolate us from our immediate physical surroundings. Yet a gradual shift in the cognitive sciences toward embodied paradigms of human cognition can inspire us to think about why and how computational media should engage our bodies and minds together. What is the role of physical movements and materials in the way we engage with and construct knowledge in the world? This talk will provide some perspectives on this question, highlighting research from the Synaesthetic Media Lab [1] that supports creativity, discovery and learning across the physical and digital worlds.","PeriodicalId":189872,"journal":{"name":"Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126960387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

ShadowHands: High-Fidelity Remote Hand Gesture Visualization using a Hand Tracker
Erroll Wood, Jonathan Taylor, J. Fogarty, A. Fitzgibbon, J. Shotton
DOI: https://doi.org/10.1145/2992154.2992169
Abstract: This paper presents ShadowHands, a novel technique for visualizing a remote user's hand gestures using a single depth sensor and hand-tracking system. Previous work has shown that making distributed users better aware of each other's gestures facilitates remote collaboration. Those systems presented virtual embodiments as a stream of raw 2D or 3D data; such data is noisy, and requires high bandwidth and favorable camera positions. Instead, our work uses a hand tracker to capture gestures, which we visualize with a high-fidelity hand model. Our system is practical, requiring only a single depth sensor placed below the screen, and can be used without per-user calibration. Because we use a 3D model rather than raw data, we can augment the hand's appearance to improve saliency and aesthetics. We alpha-blend this visualization over a shared workspace, so the local user perceives the remote user's hand as if the two were separated by a transparent display. We conducted an experiment comparing traditional hand embodiments with our new technique, showing a quantitative improvement in selection accuracy and qualitative improvements in feelings of mutual understanding and engagement.

Interactive Multi-Modal Display Spaces for Visual Analysis
T. Marrinan, Arthur Nishimoto, J. Insley, S. Rizzi, Andrew E. Johnson, M. Papka
DOI: https://doi.org/10.1145/2992154.2996792
Abstract: Classic visual analysis relies on a single medium for displaying and interacting with data. Large-scale tiled display walls, virtual reality using head-mounted displays or CAVE systems, and collaborative touch screens have all been utilized for data exploration and analysis. We present our initial findings on combining numerous display environments and input modalities to create an interactive multi-modal display space that enables researchers to leverage whichever piece of technology best suits a specific sub-task. Our main contributions are 1) the deployment of an input server that interfaces with a wide array of interaction devices to create a single uniform stream of data usable by custom visual applications, and 2) three real-world use cases of leveraging multiple display environments in conjunction with one another to enhance scientific discovery and data dissemination.
{"title":"DIRECT: Making Touch Tracking on Ordinary Surfaces Practical with Hybrid Depth-Infrared Sensing","authors":"R. Xiao, S. Hudson, Chris Harrison","doi":"10.1145/2992154.2992173","DOIUrl":"https://doi.org/10.1145/2992154.2992173","url":null,"abstract":"Several generations of inexpensive depth cameras have opened the possibility for new kinds of interaction on everyday surfaces. A number of research systems have demonstrated that depth cameras, combined with projectors for output, can turn nearly any reasonably flat surface into a touch-sensitive display. However, even with the latest generation of depth cameras, it has been difficult to obtain sufficient sensing fidelity across a table-sized surface to get much beyond a proof-of-concept demonstration. In this paper we present DIRECT, a novel touch-tracking algorithm that merges depth and infrared imagery captured by a commodity sensor. This yields significantly better touch tracking than from depth data alone, as well as any prior system. Further extending prior work, DIRECT supports arbitrary user orientation and requires no prior calibration or background capture. We describe the implementation of our system and quantify its accuracy through a comparison study of previously published, depth-based touch-tracking algorithms. Results show that our technique boosts touch detection accuracy by 15% and reduces positional error by 55% compared to the next best-performing technique.","PeriodicalId":189872,"journal":{"name":"Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113979585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

CurationSpace: Cross-Device Content Curation Using Instrumental Interaction
Frederik Brudy, Steven Houben, Nicolai Marquardt, Y. Rogers
DOI: https://doi.org/10.1145/2992154.2992175
Abstract: In digital content curation of historical artefacts, curators collaboratively collect, analyze, and edit documents, images, and other digital resources in order to display and share new representations of that information with an audience. Despite curators' increasing reliance on digital documents and tools, current technologies provide little support for these specific collaborative content-curation activities. We introduce CurationSpace, a novel cross-device system that provides more expressive tools for curating and composing digital historical artefacts. Based on the concept of Instrumental Interaction, CurationSpace allows users to interact with digital curation artefacts on shared interactive surfaces, using personal smartwatches as selectors for instruments or modifiers (applied to the whole curation space, individual documents, or fragments). We introduce a range of novel interaction techniques that allow individual curators or groups to more easily create, navigate, and share resources during content curation, and we report insights from our user study of people's use of instruments and modifiers for curation activities.
{"title":"Touch Detection System for Various Surfaces Using Shadow of Finger","authors":"T. Niikura, T. Matsubara, Naoki Mori","doi":"10.1145/2992154.2996777","DOIUrl":"https://doi.org/10.1145/2992154.2996777","url":null,"abstract":"In this paper we try to realize a system that enables users to interact with surrounding surfaces using touch interactions. For this purpose, we propose new touch detection technique which utilizes the shadows of a finger, and developed a prototype system with an infrared (IR) camera and two IR lights. Since the shapes of a finger's shadows vary drastically depending on the distance between the surface and finger, our prototype system can detect touch. To improve the accuracy of the estimated touch position, we also introduce multiple regression analysis into the estimation algorithm of the touch position. We conducted two experiments on the accuracy of estimated touch position, and the results shows that the target accuracy was within an error of less than 5 mm.","PeriodicalId":189872,"journal":{"name":"Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125780873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}