IUI. International Conference on Intelligent User Interfaces: Latest Publications

Optimizing temporal topic segmentation for intelligent text visualization
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449441
Shimei Pan, Michelle X. Zhou, Yangqiu Song, Weihong Qian, Fei Wang, Shixia Liu
Abstract: We are building a topic-based, interactive visual analytic tool that aids users in analyzing large collections of text. To help users quickly discover content evolution and significant content transitions within a topic over time, we present a novel, constraint-based approach to temporal topic segmentation. Our solution splits a discovered topic into multiple linear, non-overlapping sub-topics along a timeline by simultaneously satisfying a diverse set of semantic, temporal, and visualization constraints. For each derived sub-topic, our solution also automatically selects a set of representative keywords to summarize the main content of the sub-topic. Our extensive evaluation, including a crowd-sourced user study, demonstrates the effectiveness of our method over an existing baseline.
Citations: 16

Directing exploratory search: reinforcement learning from user interactions with keywords
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449413
D. Glowacka, Tuukka Ruotsalo, Ksenia Konyushkova, Kumaripaba Athukorala, Samuel Kaski, Giulio Jacucci
Abstract: Techniques for both exploratory and known-item search tend to direct users only to more specific subtopics or individual documents, rather than letting them direct the exploration of the information space. We present an interactive information retrieval system that combines reinforcement learning techniques with a novel user interface design to allow active engagement of users in directing the search. Users can directly manipulate document features (keywords) to indicate their interests, and reinforcement learning is used to model the user by allowing the system to trade off between exploration and exploitation. This gives users the opportunity to more effectively direct their search nearer, farther, or along a chosen direction. A task-based user study conducted with 20 participants, comparing our system to a traditional query-based baseline, indicates that our system significantly improves the effectiveness of information retrieval by providing access to more relevant and novel information without requiring more time to acquire the information.
Citations: 116

SmartDCap: semi-automatic capture of higher quality document images from a smartphone
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449433
Francine Chen, S. Carter, Laurent Denoue, J. Kumar
Abstract: People frequently capture photos with their smartphones, and some are starting to capture images of documents. However, the quality of captured document images is often lower than expected, even when an application that performs post-processing to improve the image is used. To improve the quality of captured images before post-processing, we developed the Smart Document Capture (SmartDCap) application, which provides real-time feedback to users about the likely quality of a captured image. The quality measures capture the sharpness and framing of a page or regions on a page, such as a set of one or more columns, a part of a column, a figure, or a table. Using our approach, while users adjust the camera position, the application automatically determines when to take a picture of a document to produce a good quality result. We performed a subjective evaluation comparing SmartDCap and the Android Ice Cream Sandwich (ICS) camera application; we also used raters to evaluate the quality of the captured images. Our results indicate that users find SmartDCap to be as easy to use as the standard ICS camera application. Also, images captured using SmartDCap are on average sharper and better framed than images captured with the ICS camera application.
Citations: 17

LinkedVis: exploring social and semantic career recommendations
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449412
Svetlin Bostandjiev, J. O'Donovan, Tobias Höllerer
Abstract: This paper presents LinkedVis, an interactive visual recommender system that combines social and semantic knowledge to produce career recommendations based on the LinkedIn API. A collaborative (social) approach is employed to identify professionals with similar career paths and produce personalized recommendations of both companies and roles. To unify semantically identical but lexically distinct entities and arrive at better user models, we employ lightweight natural language processing and entity resolution using semantic information from a variety of end-points on the web. Elements from the underlying recommendation algorithm are exposed through an interactive interface that allows users to manipulate different aspects of the algorithm and the data it operates on, letting them explore a variety of "what-if" scenarios around their current profile. We evaluate LinkedVis through leave-one-out accuracy and diversity experiments on a data corpus collected from 47 users and their LinkedIn connections, as well as through a supervised study of 27 users exploring their own profiles and recommendations interactively. Results show that our approach outperforms a benchmark recommendation algorithm without semantic resolution in terms of accuracy and diversity, and that the ability to tweak recommendations interactively by adjusting profile item and social connection weights further improves predictive accuracy. Questionnaires on the user experience with the explanatory and interactive aspects of the application reveal very high user acceptance and satisfaction.
Citations: 38

User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449439
B. Steichen, G. Carenini, C. Conati
Abstract: Information visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities, and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information about user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real time.
Citations: 150

Automatic and continuous user task analysis via eye activity
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449406
Siyuan Chen, J. Epps, Fang Chen
Abstract: A day in the life of a user can be segmented into a series of tasks: a user begins a task, becomes loaded perceptually and cognitively to some extent by the objects and mental challenge that comprise that task, then at some point switches or is distracted to a new task, and so on. Understanding the contextual task characteristics and user behavior in interaction can benefit the development of intelligent systems to aid user task management. Applications that aid the user in one way or another have proliferated as computing devices become more and more of a constant companion. However, direct and continuous observation of individual tasks in a naturalistic context and subsequent task analysis, for example the diary method, have traditionally been a manual process. We propose an automatic task analysis system, which monitors the user's current task and analyzes it in terms of task transitions and the perceptual and cognitive load imposed by the task. An experiment was conducted in which participants were required to work continuously on groups of three sequential tasks of different types. Three classes of eye activity, namely pupillary response, blink, and eye movement, were analyzed to detect the task transition and non-transition states, and to estimate three levels of perceptual load and three levels of cognitive load every second to infer task characteristics. This paper reports statistically significant classification accuracies in all cases and demonstrates the feasibility of this approach for task monitoring and analysis.
Citations: 31

Helping users with information disclosure decisions: potential for adaptation
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449448
Bart P. Knijnenburg, A. Kobsa
Abstract: Personalization relies on personal data about each individual user. However, users are quite often reluctant to disclose information about themselves and to be "tracked" by a system. We investigated whether different types of rationales (justifications) for disclosure that have been suggested in the privacy literature would increase users' willingness to divulge demographic and contextual information about themselves, and would raise their satisfaction with the system. We also looked at the effect of the order of requests, owing to findings from the literature. Our experiment with a mockup of a mobile app recommender shows that there is no single strategy that is optimal for everyone. Heuristics can be defined, though, that select for each user the most effective justification to raise disclosure or satisfaction, taking the user's gender, disclosure tendency, and the type of solicited personal information into account. We discuss the implications of these findings for research aimed at personalizing privacy strategies to each individual user.
Citations: 39

Team reactions to voiced agent instructions in a pervasive game
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449445
Stuart Moran, Nadia Pantidi, K. Bachour, J. Fischer, Martin Flintham, T. Rodden, Simon Evans, Simon Johnson
Abstract: The assumed role of humans as controllers and instructors of machines is changing. As systems become more complex and incomprehensible to humans, it will be increasingly necessary for us to place confidence in intelligent interfaces and follow their instructions and recommendations. This type of relationship becomes particularly intricate when we consider significant numbers of humans and agents working together in collectives. While instruction-based interfaces and agents already exist, our understanding of them within the field of Human-Computer Interaction is still limited. As such, we developed a large-scale pervasive game called 'Cargo', where a semi-autonomous rule-based agent distributes a number of text-to-speech instructions to multiple teams of players via their mobile phones as an interface. We describe how people received, negotiated, and acted upon the instructions in the game, both individually and as a team, and how players' initial plans and expectations shaped their understanding of the instructions.
Citations: 24

Towards cooperative brain-computer interfaces for space navigation
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449417
R. Poli, C. Cinel, A. Matran-Fernandez, F. Sepulveda, A. Stoica
Abstract: We explored the possibility of controlling a spacecraft simulator using an analogue brain-computer interface (BCI) for 2-D pointer control. This is a difficult task, for which no previous attempt has been reported in the literature. Our system relies on an active display that produces event-related potentials (ERPs) in the user's brain. These are analysed in real time to produce control vectors for the user interface. In tests, users of the simulator were told to pass as close as possible to the Sun. Performance was very promising, with users on average satisfying the simulation success criterion in 67.5% of the runs. Furthermore, to study the potential of a collaborative approach to spacecraft navigation, we developed BCIs in which the system is controlled via the integration of the ERPs of two users. Performance analysis indicates that collaborative BCIs produce trajectories that are statistically significantly superior to those obtained by single users.
Citations: 50

Detecting boredom and engagement during writing with keystroke analysis, task appraisals, and stable traits
IUI. International Conference on Intelligent User Interfaces · Pub Date: 2013-03-19 · DOI: 10.1145/2449396.2449426
R. Bixler, S. D'Mello
Abstract: It is hypothesized that the ability of a system to automatically detect and respond to users' affective states can greatly enhance the human-computer interaction experience. Although there are currently many options for affect detection, keystroke analysis offers several attractive advantages over traditional methods. In this paper, we consider the possibility of automatically discriminating between natural occurrences of boredom, engagement, and neutral states by analyzing keystrokes, task appraisals, and stable traits of 44 individuals engaged in a writing task. The analyses explored several different arrangements of the data: using downsampled and/or standardized data; distinguishing between three different affect states or groups of two; and using keystroke/timing features in isolation or coupled with stable traits and/or task appraisals. The results indicated that the use of raw data, together with the feature set combining keystroke/timing features with task appraisals and stable traits, yielded accuracies that were 11% to 38% above random guessing and that generalized to new individuals. Applications of our affect detector for intelligent interfaces that provide engagement support during writing are discussed.
Citations: 98
