{"title":"AR Lamp: interactions on projection-based augmented reality for interactive learning","authors":"Jeongyun Kim, Jonghoon Seo, T. Han","doi":"10.1145/2557500.2557505","DOIUrl":"https://doi.org/10.1145/2557500.2557505","url":null,"abstract":"Today, people use a computer almost everywhere. At the same time, they still do their work in the old-fashioned way, such as using a pen and paper. A pen is often used in many fields because it is easy to use and familiar. On the other hand, however, it is a quite inconvenient because the information printed on paper is static. If digital features are added to this paper environment, the users can do their work more easily and efficiently. AR (augmented reality) Lamp is a stand-type projector and camera embedded system with the form factor of a desk lamp. Its users can modify the virtually augmented content on top of the paper with seamlessly combined virtual and physical worlds. AR is quite appealing, but it is difficult to popularize due to the lack of interaction. In this paper, the interaction methods that people can use easily and intuitively are focused on. A high-fidelity prototype of the system is presented, and a set of novel interactions is demonstrated. A pilot evaluation of the system is also reported to explore its usage possibility.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122028877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A mixed reality head-mounted text translation system using eye gaze input","authors":"Takumi Toyama, Daniel Sonntag, A. Dengel, Takahiro Matsuda, M. Iwamura, K. Kise","doi":"10.1145/2557500.2557528","DOIUrl":"https://doi.org/10.1145/2557500.2557528","url":null,"abstract":"Efficient text recognition has recently been a challenge for augmented reality systems. In this paper, we propose a system with the ability to provide translations to the user in real-time. We use eye gaze for more intuitive and efficient input for ubiquitous text reading and translation in head mounted displays (HMDs). The eyes can be used to indicate regions of interest in text documents and activate optical-character-recognition (OCR) and translation functions. Visual feedback and navigation help in the interaction process, and text snippets with translations from Japanese to English text snippets, are presented in a see-through HMD. We focus on travelers who go to Japan and need to read signs and propose two different gaze gestures for activating the OCR text reading and translation function. We evaluate which type of gesture suits our OCR scenario best. We also show that our gaze-based OCR method on the extracted gaze regions provide faster access times to information than traditional OCR approaches. Other benefits include that visual feedback of the extracted text region can be given in real-time, the Japanese to English translation can be presented in real-time, and the augmentation of the synchronized and calibrated HMD in this mixed reality application are presented at exact locations in the augmented user view to allow for dynamic text translation management in head-up display systems.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130110558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A practical framework for constructing structured drawings","authors":"Salman Cheema, Sarah Buchanan, Sumit Gulwani, J. Laviola","doi":"10.1145/2557500.2557522","DOIUrl":"https://doi.org/10.1145/2557500.2557522","url":null,"abstract":"We describe a novel theoretical framework for modeling structured drawings which contain one or more patterns of repetition in their constituent elements. We then present PatternSketch, a sketch-based drawing tool built using our framework to allow quick construction of structured drawings. PatternSketch can recognize and beautify drawings containing line segments, polylines, arcs, and circles. Users can employ a series of gestures to identify repetitive elements and create new elements based on automatically inferred patterns. PatternSketch leverages the programming-by-example (PBE) paradigm, enabling it to infer non-trivial patterns from a few examples. We show that PatternSketch, with its sketch-based user interface and a unique pattern inference algorithm, enables efficient and natural construction of structured drawings.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130706949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active learning of intuitive control knobs for synthesizers using gaussian processes","authors":"Cheng-Zhi Anna Huang, D. Duvenaud, Kenneth C. Arnold, B. Partridge, Josiah Oberholtzer, Krzysztof Z Gajos","doi":"10.1145/2557500.2557544","DOIUrl":"https://doi.org/10.1145/2557500.2557544","url":null,"abstract":"Typical synthesizers only provide controls to the low-level parameters of sound-synthesis, such as wave-shapes or filter envelopes. In contrast, composers often want to adjust and express higher-level qualities, such as how \"scary\" or \"steady\" sounds are perceived to be. We develop a system which allows users to directly control abstract, high-level qualities of sounds. To do this, our system learns functions that map from synthesizer control settings to perceived levels of high-level qualities. Given these functions, our system can generate high-level knobs that directly adjust sounds to have more or less of those qualities. We model the functions mapping from control-parameters to the degree of each high-level quality using Gaussian processes, a nonparametric Bayesian model. These models can adjust to the complexity of the function being learned, account for nonlinear interaction between control-parameters, and allow us to characterize the uncertainty about the functions being learned. By tracking uncertainty about the functions being learned, we can use active learning to quickly calibrate the tool, by querying the user about the sounds the system expects to most improve its performance. We show through simulations that this model-based active learning approach learns high-level knobs on certain classes of target concepts faster than several baselines, and give examples of the resulting automatically- constructed knobs which adjust levels of non-linear, high- level concepts.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130847263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"See what you want to see: visual user-driven approach for hybrid recommendation","authors":"Denis Parra, Peter Brusilovsky, C. Trattner","doi":"10.1145/2557500.2557542","DOIUrl":"https://doi.org/10.1145/2557500.2557542","url":null,"abstract":"Research in recommender systems has traditionally focused on improving the predictive accuracy of recommendations by developing new algorithms or by incorporating new sources of data. However, several studies have shown that accuracy does not always correlate with a better user experience, leading to recent research that puts emphasis on Human-Computer Interaction in order to investigate aspects of the interface and user characteristics that influence the user experience on recommender systems. Following this new research this paper presents SetFusion, a visual user-controllable interface for hybrid recommender system. Our approach enables users to manually fuse and control the importance of recommender strategies and to inspect the fusion results using an interactive Venn diagram visualization. We analyze the results of two field studies in the context of a conference talk recommendation system, performed to investigate the effect of user controllability in a hybrid recommender. Behavioral analysis and subjective evaluation indicate that the proposed controllable interface had a positive effect on the user experience.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134234423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Leveraging social competencies","authors":"Cécile Paris","doi":"10.1145/3260905","DOIUrl":"https://doi.org/10.1145/3260905","url":null,"abstract":"","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127609452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Learning and skills","authors":"Shimei Pan","doi":"10.1145/3260902","DOIUrl":"https://doi.org/10.1145/3260902","url":null,"abstract":"","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115610278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving accuracy in back-of-device multitouch typing: a clustering-based approach to keyboard updating","authors":"Daniel Buschek, Oliver Schoenleben, Antti Oulasvirta","doi":"10.1145/2557500.2557501","DOIUrl":"https://doi.org/10.1145/2557500.2557501","url":null,"abstract":"Recent work has shown that a multitouch sensor attached to the back of a handheld device can allow rapid typing engaging all ten fingers. However, high error rates remain a problem, because the user can not see or feel key-targets on the back. We propose a machine learning approach that can significantly improve accuracy. The method considers hand anatomy and movement ranges of fingers. The key insight is a combination of keyboard and hand models in a hierarchical clustering method. This enables dynamic re-estimation of key-locations while typing to account for changes in hand postures and movement ranges of fingers. We also show that accuracy can be further improved with language models. Results from a user study show improvements of over 40% compared to the previously deployed \"naive\" approach. We examine entropy as a touch precision metric with respect to typing experience. We also find that the QWERTY layout is not ideal. Finally, we conclude with ideas for further improvements.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116227884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using augmented reality to create empathic experiences","authors":"M. Billinghurst","doi":"10.1145/2557500.2568057","DOIUrl":"https://doi.org/10.1145/2557500.2568057","url":null,"abstract":"Intelligent user interfaces have traditionally been used to create systems that respond intelligently to user input. However there is a recent trend towards Empathic Interfaces that are designed to go beyond understanding user input and to recognize emotional state and user feelings. In this presentation we explore how Augmented Reality (AR) can be used to convey that emotional state and so allow users to capture and share emotional experiences. In this way AR not only overlays virtual imagery on the real world, but also can create deeper understanding of user's experience at particular locations and points in time. The recent emergence of truly wearable systems, such as Google Glass, provide a platform for Empathic Communication using AR. Examples will be shown from research conducted at the HIT Lab NZ and other research organizations, and key areas for future research described.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128882448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tagging-by-search: automatic image region labeling using gaze information obtained from image search","authors":"T. Walber, Chantal Neuhaus, A. Scherp","doi":"10.1145/2557500.2557517","DOIUrl":"https://doi.org/10.1145/2557500.2557517","url":null,"abstract":"Labeled image regions provide very valuable information that can be used in different settings such as image search. The manual creation of region labels is a tedious task. Fully automatic approaches lack understanding the image content sufficiently due to the huge variety of depicted objects. Our approach benefits from the expected spread of eye tracking hardware and uses gaze information obtained from users performing image search tasks to automatically label image regions. This allows to exploit the human capabilities regarding the visual perception of image content while performing daily routine tasks. In an experiment with 23 participants, we show that it is possible to assign search terms to photo regions by means of gaze analysis with an average precision of 0.56 and an average F-measure of 0.38 over 361 photos. The participants performed different search tasks while their gaze was recorded. The results of the experiment show that the gaze-based approach performs significantly better than a baseline approach based on saliency maps.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124445320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}