Proceedings of the 19th international conference on Intelligent User Interfaces: Latest Publications

AR Lamp: interactions on projection-based augmented reality for interactive learning
Proceedings of the 19th international conference on Intelligent User Interfaces. Pub Date: 2014-02-24. DOI: 10.1145/2557500.2557505
Jeongyun Kim, Jonghoon Seo, T. Han
{"title":"AR Lamp: interactions on projection-based augmented reality for interactive learning","authors":"Jeongyun Kim, Jonghoon Seo, T. Han","doi":"10.1145/2557500.2557505","DOIUrl":"https://doi.org/10.1145/2557500.2557505","url":null,"abstract":"Today, people use a computer almost everywhere. At the same time, they still do their work in the old-fashioned way, such as using a pen and paper. A pen is often used in many fields because it is easy to use and familiar. On the other hand, however, it is a quite inconvenient because the information printed on paper is static. If digital features are added to this paper environment, the users can do their work more easily and efficiently. AR (augmented reality) Lamp is a stand-type projector and camera embedded system with the form factor of a desk lamp. Its users can modify the virtually augmented content on top of the paper with seamlessly combined virtual and physical worlds. AR is quite appealing, but it is difficult to popularize due to the lack of interaction. In this paper, the interaction methods that people can use easily and intuitively are focused on. A high-fidelity prototype of the system is presented, and a set of novel interactions is demonstrated. A pilot evaluation of the system is also reported to explore its usage possibility.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122028877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
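To make the projection idea concrete: a projector-camera system like the one described must map points detected in the camera image (for example, the corners of the sheet of paper) into projector coordinates so that augmented content lands on the page. The abstract does not give the authors' registration method; the sketch below assumes a standard planar homography estimated from four corner correspondences via the direct linear transform, with all coordinate values invented for illustration.

```python
# Hedged sketch of projector-camera registration for a desk-lamp AR setup.
# Assumption (not from the paper): the paper sheet is planar, so a 3x3
# homography H maps camera coordinates to projector coordinates.
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: H such that dst ~ H @ src (4+ point pairs)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)           # null-space vector = flattened H

def to_projector(H, point):
    p = H @ np.array([point[0], point[1], 1.0])
    return p[:2] / p[2]                   # perspective divide

# Paper corners as seen by the camera vs. where the projector must draw them.
cam  = [(102, 88), (518, 95), (530, 410), (95, 402)]
proj = [(0, 0), (1280, 0), (1280, 800), (0, 800)]
H = estimate_homography(cam, proj)
print(to_projector(H, (306, 250)))        # an annotation detected mid-page
```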
A mixed reality head-mounted text translation system using eye gaze input
Proceedings of the 19th international conference on Intelligent User Interfaces. Pub Date: 2014-02-24. DOI: 10.1145/2557500.2557528
Takumi Toyama, Daniel Sonntag, A. Dengel, Takahiro Matsuda, M. Iwamura, K. Kise
{"title":"A mixed reality head-mounted text translation system using eye gaze input","authors":"Takumi Toyama, Daniel Sonntag, A. Dengel, Takahiro Matsuda, M. Iwamura, K. Kise","doi":"10.1145/2557500.2557528","DOIUrl":"https://doi.org/10.1145/2557500.2557528","url":null,"abstract":"Efficient text recognition has recently been a challenge for augmented reality systems. In this paper, we propose a system with the ability to provide translations to the user in real-time. We use eye gaze for more intuitive and efficient input for ubiquitous text reading and translation in head mounted displays (HMDs). The eyes can be used to indicate regions of interest in text documents and activate optical-character-recognition (OCR) and translation functions. Visual feedback and navigation help in the interaction process, and text snippets with translations from Japanese to English text snippets, are presented in a see-through HMD. We focus on travelers who go to Japan and need to read signs and propose two different gaze gestures for activating the OCR text reading and translation function. We evaluate which type of gesture suits our OCR scenario best. We also show that our gaze-based OCR method on the extracted gaze regions provide faster access times to information than traditional OCR approaches. Other benefits include that visual feedback of the extracted text region can be given in real-time, the Japanese to English translation can be presented in real-time, and the augmentation of the synchronized and calibrated HMD in this mixed reality application are presented at exact locations in the augmented user view to allow for dynamic text translation management in head-up display systems.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130110558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
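The interaction loop in this paper (gaze indicates a text region, a gaze gesture triggers OCR, and the translation is overlaid in the HMD) can be sketched as follows. This is a toy illustration, not the authors' implementation: the dwell threshold, the region padding, and the `ocr_japanese`/`translate_ja_en` stubs are invented stand-ins for a real gaze-gesture detector, OCR engine, and translation service.

```python
# Minimal sketch of a dwell-triggered gaze OCR + translation pipeline.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float            # gaze x in scene-camera pixels
    y: float
    duration_ms: float

DWELL_THRESHOLD_MS = 800   # assumed dwell time that activates OCR
REGION_PADDING_PX = 40     # assumed padding around the fixation cluster

def detect_dwell(fixations):
    """Return the fixations if they form a dwell, else None (no trigger)."""
    total = sum(f.duration_ms for f in fixations)
    return fixations if total >= DWELL_THRESHOLD_MS else None

def gaze_region(fixations):
    """Bounding box around the fixation cluster, padded to catch full words."""
    xs = [f.x for f in fixations]
    ys = [f.y for f in fixations]
    return (min(xs) - REGION_PADDING_PX, min(ys) - REGION_PADDING_PX,
            max(xs) + REGION_PADDING_PX, max(ys) + REGION_PADDING_PX)

def ocr_japanese(frame, box):       # hypothetical OCR backend
    return "出口"

def translate_ja_en(text):          # hypothetical translation backend
    return {"出口": "exit"}.get(text, text)

def on_gaze_update(frame, recent_fixations):
    dwell = detect_dwell(recent_fixations)
    if dwell:
        box = gaze_region(dwell)
        text = ocr_japanese(frame, box)
        return box, translate_ja_en(text)   # overlay at `box` in the HMD
    return None

print(on_gaze_update(None, [Fixation(310, 220, 500), Fixation(318, 224, 400)]))
```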
A practical framework for constructing structured drawings
Proceedings of the 19th international conference on Intelligent User Interfaces. Pub Date: 2014-02-24. DOI: 10.1145/2557500.2557522
Salman Cheema, Sarah Buchanan, Sumit Gulwani, J. Laviola
{"title":"A practical framework for constructing structured drawings","authors":"Salman Cheema, Sarah Buchanan, Sumit Gulwani, J. Laviola","doi":"10.1145/2557500.2557522","DOIUrl":"https://doi.org/10.1145/2557500.2557522","url":null,"abstract":"We describe a novel theoretical framework for modeling structured drawings which contain one or more patterns of repetition in their constituent elements. We then present PatternSketch, a sketch-based drawing tool built using our framework to allow quick construction of structured drawings. PatternSketch can recognize and beautify drawings containing line segments, polylines, arcs, and circles. Users can employ a series of gestures to identify repetitive elements and create new elements based on automatically inferred patterns. PatternSketch leverages the programming-by-example (PBE) paradigm, enabling it to infer non-trivial patterns from a few examples. We show that PatternSketch, with its sketch-based user interface and a unique pattern inference algorithm, enables efficient and natural construction of structured drawings.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130706949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
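The programming-by-example step, inferring a repetition pattern from a few user-drawn examples and generating further elements, can be illustrated with a deliberately simple sketch. This is not PatternSketch's actual inference algorithm (which handles richer pattern classes); it only infers a constant positional offset from evenly spaced examples.

```python
# Toy programming-by-example: infer a constant offset from example element
# positions, then extrapolate new elements along the inferred pattern.

def infer_offset(examples):
    """examples: list of (x, y) positions of repeated elements, in order.
    Returns the common (dx, dy) if the examples are evenly spaced."""
    deltas = [(x2 - x1, y2 - y1)
              for (x1, y1), (x2, y2) in zip(examples, examples[1:])]
    if deltas and all(d == deltas[0] for d in deltas):
        return deltas[0]
    return None  # no simple linear pattern in these examples

def extrapolate(examples, count):
    """Generate `count` new element positions continuing the pattern."""
    pattern = infer_offset(examples)
    if pattern is None:
        return []
    dx, dy = pattern
    x, y = examples[-1]
    return [(x + dx * i, y + dy * i) for i in range(1, count + 1)]

# Two example circles drawn by the user, 50 px apart; generate three more.
print(extrapolate([(100, 200), (150, 200)], 3))
# [(200, 200), (250, 200), (300, 200)]
```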
Active learning of intuitive control knobs for synthesizers using gaussian processes
Proceedings of the 19th international conference on Intelligent User Interfaces. Pub Date: 2014-02-24. DOI: 10.1145/2557500.2557544
Cheng-Zhi Anna Huang, D. Duvenaud, Kenneth C. Arnold, B. Partridge, Josiah Oberholtzer, Krzysztof Z Gajos
{"title":"Active learning of intuitive control knobs for synthesizers using gaussian processes","authors":"Cheng-Zhi Anna Huang, D. Duvenaud, Kenneth C. Arnold, B. Partridge, Josiah Oberholtzer, Krzysztof Z Gajos","doi":"10.1145/2557500.2557544","DOIUrl":"https://doi.org/10.1145/2557500.2557544","url":null,"abstract":"Typical synthesizers only provide controls to the low-level parameters of sound-synthesis, such as wave-shapes or filter envelopes. In contrast, composers often want to adjust and express higher-level qualities, such as how \"scary\" or \"steady\" sounds are perceived to be. We develop a system which allows users to directly control abstract, high-level qualities of sounds. To do this, our system learns functions that map from synthesizer control settings to perceived levels of high-level qualities. Given these functions, our system can generate high-level knobs that directly adjust sounds to have more or less of those qualities. We model the functions mapping from control-parameters to the degree of each high-level quality using Gaussian processes, a nonparametric Bayesian model. These models can adjust to the complexity of the function being learned, account for nonlinear interaction between control-parameters, and allow us to characterize the uncertainty about the functions being learned. By tracking uncertainty about the functions being learned, we can use active learning to quickly calibrate the tool, by querying the user about the sounds the system expects to most improve its performance. We show through simulations that this model-based active learning approach learns high-level knobs on certain classes of target concepts faster than several baselines, and give examples of the resulting automatically- constructed knobs which adjust levels of non-linear, high- level concepts.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130847263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
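The core loop the abstract describes (fit a Gaussian process from control settings to user ratings of a quality, query where the model is most uncertain, then read the learned function off as a high-level knob) can be sketched with a minimal from-scratch GP. The RBF kernel, its hyperparameters, and the `user_rating` oracle below are assumptions for illustration, not the authors' model.

```python
# Hedged sketch: GP regression from one synthesizer control parameter to a
# user-rated quality (e.g. "scariness"), with uncertainty-driven queries.
import numpy as np

def rbf(a, b, length=0.3, var=1.0):
    """Squared-exponential kernel between two 1-D arrays of settings."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    """Posterior mean and variance at test settings Xs given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    Kinv = np.linalg.inv(K)
    return Ks.T @ Kinv @ y, np.diag(Kss - Ks.T @ Kinv @ Ks)

def user_rating(x):  # hypothetical oracle: the user's perceived quality level
    return np.sin(6 * x) + 0.1 * np.random.randn()

grid = np.linspace(0, 1, 200)          # candidate control settings
X = np.array([0.1, 0.9])               # initial queries
y = np.array([user_rating(x) for x in X])

for _ in range(10):                    # active-learning loop
    _, var = gp_posterior(X, y, grid)
    xq = grid[np.argmax(var)]          # query the most uncertain setting
    X = np.append(X, xq)
    y = np.append(y, user_rating(xq))

mean, _ = gp_posterior(X, y, grid)
print(f"high-level knob at its max: control = {grid[np.argmax(mean)]:.2f}")
```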
See what you want to see: visual user-driven approach for hybrid recommendation
Proceedings of the 19th international conference on Intelligent User Interfaces. Pub Date: 2014-02-24. DOI: 10.1145/2557500.2557542
Denis Parra, Peter Brusilovsky, C. Trattner
{"title":"See what you want to see: visual user-driven approach for hybrid recommendation","authors":"Denis Parra, Peter Brusilovsky, C. Trattner","doi":"10.1145/2557500.2557542","DOIUrl":"https://doi.org/10.1145/2557500.2557542","url":null,"abstract":"Research in recommender systems has traditionally focused on improving the predictive accuracy of recommendations by developing new algorithms or by incorporating new sources of data. However, several studies have shown that accuracy does not always correlate with a better user experience, leading to recent research that puts emphasis on Human-Computer Interaction in order to investigate aspects of the interface and user characteristics that influence the user experience on recommender systems. Following this new research this paper presents SetFusion, a visual user-controllable interface for hybrid recommender system. Our approach enables users to manually fuse and control the importance of recommender strategies and to inspect the fusion results using an interactive Venn diagram visualization. We analyze the results of two field studies in the context of a conference talk recommendation system, performed to investigate the effect of user controllability in a hybrid recommender. Behavioral analysis and subjective evaluation indicate that the proposed controllable interface had a positive effect on the user experience.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134234423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 95
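The manual-fusion idea can be illustrated with a small sketch: each strategy produces item scores, and user-set slider weights determine how the scores combine into one ranking. The strategies, items, scores, and normalization below are invented for illustration; SetFusion's actual fusion and its Venn-diagram inspection are richer than this.

```python
# Toy user-controllable hybrid fusion: weighted sum of normalized scores.

def fuse(strategy_scores, weights):
    """strategy_scores: {strategy: {item: score}}; weights: {strategy: w}.
    Returns items ranked by the weighted sum of per-strategy scores,
    each strategy normalized by its own top score."""
    fused = {}
    for name, scores in strategy_scores.items():
        top = max(scores.values()) or 1.0
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + weights[name] * (s / top)
    return sorted(fused.items(), key=lambda kv: -kv[1])

scores = {
    "content_based": {"talk_A": 0.9, "talk_B": 0.4},
    "collaborative": {"talk_B": 0.8, "talk_C": 0.7},
    "popularity":    {"talk_A": 0.5, "talk_C": 0.9},
}
# The user drags the "collaborative" slider up and "popularity" down,
# which pushes talk_B to the top of the fused ranking.
print(fuse(scores, {"content_based": 1.0, "collaborative": 1.5, "popularity": 0.2}))
```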
Session details: Leveraging social competencies
Cécile Paris
DOI: 10.1145/3260905
Citations: 0
Session details: Learning and skills
Shimei Pan
DOI: 10.1145/3260902
Citations: 0
Improving accuracy in back-of-device multitouch typing: a clustering-based approach to keyboard updating
Proceedings of the 19th international conference on Intelligent User Interfaces. Pub Date: 2014-02-24. DOI: 10.1145/2557500.2557501
Daniel Buschek, Oliver Schoenleben, Antti Oulasvirta
{"title":"Improving accuracy in back-of-device multitouch typing: a clustering-based approach to keyboard updating","authors":"Daniel Buschek, Oliver Schoenleben, Antti Oulasvirta","doi":"10.1145/2557500.2557501","DOIUrl":"https://doi.org/10.1145/2557500.2557501","url":null,"abstract":"Recent work has shown that a multitouch sensor attached to the back of a handheld device can allow rapid typing engaging all ten fingers. However, high error rates remain a problem, because the user can not see or feel key-targets on the back. We propose a machine learning approach that can significantly improve accuracy. The method considers hand anatomy and movement ranges of fingers. The key insight is a combination of keyboard and hand models in a hierarchical clustering method. This enables dynamic re-estimation of key-locations while typing to account for changes in hand postures and movement ranges of fingers. We also show that accuracy can be further improved with language models. Results from a user study show improvements of over 40% compared to the previously deployed \"naive\" approach. We examine entropy as a touch precision metric with respect to typing experience. We also find that the QWERTY layout is not ideal. Finally, we conclude with ideas for further improvements.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116227884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
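A minimal sketch of the dynamic key re-estimation idea, under a strong simplification: here each accepted touch simply pulls its key's estimated center toward it by exponential smoothing, so targets drift with hand posture. The paper's method combines keyboard and hand models in hierarchical clustering; the `ALPHA` rate and the two-key layout below are invented for illustration.

```python
# Toy online key-location adaptation for back-of-device typing.

KEY_CENTERS = {"f": (0.30, 0.50), "j": (0.70, 0.50)}  # normalized coordinates
ALPHA = 0.2  # assumed adaptation rate

def classify(touch):
    """Assign a touch to the nearest current key center."""
    return min(KEY_CENTERS, key=lambda k: (KEY_CENTERS[k][0] - touch[0]) ** 2
                                        + (KEY_CENTERS[k][1] - touch[1]) ** 2)

def update(key, touch):
    """Move the key center toward the accepted touch (exponential smoothing)."""
    cx, cy = KEY_CENTERS[key]
    KEY_CENTERS[key] = (cx + ALPHA * (touch[0] - cx), cy + ALPHA * (touch[1] - cy))

# An index finger that lands lower and further right each time.
for touch in [(0.33, 0.55), (0.35, 0.57), (0.36, 0.58)]:
    update(classify(touch), touch)
print(KEY_CENTERS)  # "f" has drifted toward the user's actual touch locations
```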
Using augmented reality to create empathic experiences
Proceedings of the 19th international conference on Intelligent User Interfaces. Pub Date: 2014-02-24. DOI: 10.1145/2557500.2568057
M. Billinghurst
{"title":"Using augmented reality to create empathic experiences","authors":"M. Billinghurst","doi":"10.1145/2557500.2568057","DOIUrl":"https://doi.org/10.1145/2557500.2568057","url":null,"abstract":"Intelligent user interfaces have traditionally been used to create systems that respond intelligently to user input. However there is a recent trend towards Empathic Interfaces that are designed to go beyond understanding user input and to recognize emotional state and user feelings. In this presentation we explore how Augmented Reality (AR) can be used to convey that emotional state and so allow users to capture and share emotional experiences. In this way AR not only overlays virtual imagery on the real world, but also can create deeper understanding of user's experience at particular locations and points in time. The recent emergence of truly wearable systems, such as Google Glass, provide a platform for Empathic Communication using AR. Examples will be shown from research conducted at the HIT Lab NZ and other research organizations, and key areas for future research described.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128882448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Tagging-by-search: automatic image region labeling using gaze information obtained from image search
Proceedings of the 19th international conference on Intelligent User Interfaces. Pub Date: 2014-02-24. DOI: 10.1145/2557500.2557517
T. Walber, Chantal Neuhaus, A. Scherp
{"title":"Tagging-by-search: automatic image region labeling using gaze information obtained from image search","authors":"T. Walber, Chantal Neuhaus, A. Scherp","doi":"10.1145/2557500.2557517","DOIUrl":"https://doi.org/10.1145/2557500.2557517","url":null,"abstract":"Labeled image regions provide very valuable information that can be used in different settings such as image search. The manual creation of region labels is a tedious task. Fully automatic approaches lack understanding the image content sufficiently due to the huge variety of depicted objects. Our approach benefits from the expected spread of eye tracking hardware and uses gaze information obtained from users performing image search tasks to automatically label image regions. This allows to exploit the human capabilities regarding the visual perception of image content while performing daily routine tasks. In an experiment with 23 participants, we show that it is possible to assign search terms to photo regions by means of gaze analysis with an average precision of 0.56 and an average F-measure of 0.38 over 361 photos. The participants performed different search tasks while their gaze was recorded. The results of the experiment show that the gaze-based approach performs significantly better than a baseline approach based on saliency maps.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124445320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
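A toy sketch of the tagging-by-search principle: while the user searches for a term, fixation durations are accumulated per image region, and the term is assigned to the region that attracted the most gaze. The regions, fixations, and dwell-time heuristic below are invented for illustration; the paper's gaze analysis is more sophisticated and is evaluated against a saliency-map baseline.

```python
# Toy gaze-based region labeling driven by a search term.

def region_of(fix, regions):
    """Return the name of the region containing a fixation point, if any."""
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= fix[0] <= x1 and y0 <= fix[1] <= y1:
            return name
    return None

def label_region(search_term, fixations, regions):
    """Assign `search_term` to the region with the most fixation time."""
    dwell = {name: 0.0 for name in regions}
    for x, y, dur in fixations:
        name = region_of((x, y), regions)
        if name:
            dwell[name] += dur
    best = max(dwell, key=dwell.get)
    return {best: search_term}

regions = {"region_1": (0, 0, 200, 200), "region_2": (200, 0, 400, 200)}
fixations = [(50, 80, 300), (60, 90, 250), (320, 40, 120)]  # (x, y, ms)
print(label_region("dog", fixations, regions))  # {'region_1': 'dog'}
```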