Proceedings of the 22nd International Conference on Intelligent User Interfaces: Latest Publications

WikiLyzer: Interactive Information Quality Assessment in Wikipedia
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025201
Authors: Cecilia di Sciascio, D. Strohmaier, M. Errecalde, Eduardo Veas
Abstract: Digital libraries and services enable users to access large amounts of data on demand. Yet, quality assessment of information encountered on the Internet remains an elusive open issue. For example, Wikipedia, one of the most visited platforms on the Web, hosts thousands of user-generated articles and undergoes 12 million edits/contributions per month. User-generated content is undoubtedly one of the keys to its success, but also a hindrance to good quality: contributions can be of poor quality because anyone, even anonymous users, can participate. Though Wikipedia has defined guidelines as to what makes the perfect article, authors find it difficult to ascertain whether their contributions comply with them, and reviewers cannot cope with the ever-growing number of articles pending review. Great efforts have been invested in algorithmic methods for automatic classification of Wikipedia articles (as featured or non-featured) and for quality flaw detection. However, little has been done to support quality assessment of user-generated content through interactive tools that combine automatic methods and human intelligence. We developed WikiLyzer, a Web toolkit comprising three interactive applications designed to assist (i) knowledge discovery experts in creating and testing metrics for quality measurement, (ii) Wikipedia users searching for good articles, and (iii) Wikipedia authors who need to identify weaknesses to improve a particular article. A design study sheds light on how experts can create complex quality metrics with our tool, while a user study reports on its usefulness for identifying high-quality content.
Citations: 8
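The abstract describes experts composing quality metrics from measurable article features. A minimal sketch of that idea, assuming hypothetical features such as word count, reference count, and image count (the paper does not disclose WikiLyzer's actual metric definitions):

```python
# Illustrative sketch: a weighted, user-composable quality metric for a
# Wikipedia article. The feature set and weights are hypothetical; they
# are not WikiLyzer's actual metrics.

def quality_score(article, weights):
    """Combine normalized article features into a single quality score in [0, 1]."""
    features = {
        "length": min(article["word_count"] / 5000.0, 1.0),        # saturates at 5000 words
        "references": min(article["reference_count"] / 100.0, 1.0),  # saturates at 100 references
        "images": min(article["image_count"] / 10.0, 1.0),           # saturates at 10 images
    }
    return sum(weights[name] * value for name, value in features.items())

article = {"word_count": 3200, "reference_count": 45, "image_count": 4}
weights = {"length": 0.4, "references": 0.4, "images": 0.2}
print(round(quality_score(article, weights), 3))  # 0.516
```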
Utilizing Human Cognitive and Emotional Factors for User-Centered Computing
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3026366
Authors: G. Samaras
Abstract: Intelligent interactive systems should not ignore the individuality of the user. The "one-size-fits-all" approach, especially in user interaction, is not appropriate when user satisfaction and acceptability are primary goals. Each user has unique cognitive processing styles and abilities. In addition, emotions change over time, which may affect the user's cognitive state and the overall interaction process. Unsurprisingly, users' ability to control their emotions is another essential factor in adapting user interfaces, applications, and data delivery. How can an interactive system adapt to human cognitive and emotional factors with the aim of delivering a personalized and more usable interface? Is there a user interface to an application or system that is equally effective for all types of users? How can we place the human at the center of everyday interaction and task activity? This keynote speech presents some of the approaches our work at the DMAC Lab/SCRAT Group has taken toward understanding how individual differences in human cognitive processing and emotional factors place the user at the center of everyday interaction.
Citations: 1
Towards Automating Data Narratives
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025193
Authors: Y. Gil, D. Garijo
Abstract: We propose a new area of research on automating data narratives. Data narratives are containers of information about computationally generated research findings. They have three major components: 1) a record of events that describes a new result through a workflow and/or the provenance of all the computations executed; 2) persistent entries for key entities involved, such as data, software versions, and workflows; 3) a set of narrative accounts, automatically generated human-consumable renderings of the record and entities that can be included in a paper. Different narrative accounts can be used for different audiences, with different content and detail based on the level of interest or expertise of the reader. Data narratives can make science more transparent and reproducible because they ensure that the text description of the computational experiment reflects with high fidelity what was actually done. Data narratives can be incorporated in papers, either in the methods section or as supplementary materials. We introduce DANA, a prototype that illustrates how to generate data narratives automatically, and describe the information it uses from the computational records. We also present a formative evaluation of our approach and discuss potential uses of automated data narratives.
Citations: 19
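The three components described in the abstract suggest a pipeline from a machine-readable record of computations to a human-readable narrative account. A minimal sketch of that rendering step, assuming a simplified record structure (this is not DANA's actual representation):

```python
# Illustrative sketch: render a human-readable narrative account from a
# simplified provenance record. The record schema is hypothetical.

record = {
    "workflow": "gene-expression-analysis",
    "steps": [
        {"software": "normalize.py", "version": "1.2", "input": "raw_counts.csv", "output": "normalized.csv"},
        {"software": "cluster.R", "version": "0.9", "input": "normalized.csv", "output": "clusters.json"},
    ],
}

def narrative_account(record):
    """Produce a methods-style paragraph; a real system could vary detail by audience."""
    lines = [f"The result was produced by the '{record['workflow']}' workflow."]
    for step in record["steps"]:
        lines.append(
            f"{step['software']} (version {step['version']}) processed {step['input']} "
            f"to produce {step['output']}."
        )
    return " ".join(lines)

print(narrative_account(record))
```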
Social Intelligence Modeling using Wearable Devices
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025195
Authors: A. Mihoub, G. Lefebvre
Abstract: Social Signal Processing techniques have made it possible to analyze in depth human behavior in social face-to-face interactions. With recent advances, these techniques can now be used to augment social interactions, especially a speaker's behavior in oral presentations. The goal of this paper is to train a computational model able to provide relevant feedback to a public speaker concerning their coverbal communication. The role of this model is thus to augment the social intelligence of the orator and, in turn, the relevance of their presentation. To this end, we present an original interaction setting in which the speaker is equipped with only wearable devices. Several coverbal modalities were extracted and automatically annotated, namely speech volume, intonation, speech rate, eye gaze, hand gestures, and body movements. An offline report containing the performance scores for all modalities was sent to participants. In addition, a post-experiment study was conducted to collect participants' opinions on many aspects of the studied interaction, and the results were rather positive. Moreover, we annotated recommended feedback for each presentation session, and to retrieve these annotations, a Dynamic Bayesian Network model was trained using the multimodal performance scores as inputs. We show that our assessment behavior model performs well compared to other models.
Citations: 9
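As a rough illustration of turning raw coverbal measurements into the kind of per-modality performance scores the abstract mentions, here is a sketch with hypothetical target ranges; the paper's actual scoring and its Dynamic Bayesian Network are not reproduced:

```python
# Illustrative sketch: map raw coverbal measurements to 0-1 performance
# scores by distance from a hypothetical comfortable range. The ranges and
# modalities are assumptions, not the paper's model.

def range_score(value, low, high):
    """1.0 inside [low, high], decaying linearly to 0.0 one range-width outside."""
    if low <= value <= high:
        return 1.0
    width = high - low
    distance = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - distance / width)

measurements = {"speech_rate_wpm": 185, "volume_db": 58, "gaze_at_audience_ratio": 0.45}
targets = {"speech_rate_wpm": (120, 160), "volume_db": (55, 70), "gaze_at_audience_ratio": (0.6, 1.0)}

scores = {m: round(range_score(v, *targets[m]), 2) for m, v in measurements.items()}
print(scores)  # {'speech_rate_wpm': 0.38, 'volume_db': 1.0, 'gaze_at_audience_ratio': 0.62}
```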
An Intelligent Interface for Organizing Online Opinions on Controversial Topics
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025230
Authors: Mingkun Gao, H. Do, W. Fu
Abstract: An enormous number of posts and comments are shared in online social forums, which often organize these opinions by semantic content. However, for controversial topics, people with different attitudes and stances often have very distinct perspectives, reactions, and emotions to the same post. Organization by semantic content alone often encourages selective exposure to information, which may exacerbate opinion polarization. To address this problem, we designed a novel interface that allows people to better understand and appreciate people with different stances in social forums. Our interface supports interactive visualization and categorization of original posts about a controversial topic, together with crowd workers' reactions and emotions from different stances. We evaluated the interface using Reddit posts about US presidential candidates. Results demonstrate that the interface can mitigate selective exposure and help users adopt a broader spectrum of opinions than the traditional Reddit interface.
Citations: 11
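The interface organizes reactions to a post by the reader's stance and emotion rather than by topic alone. A minimal sketch of that grouping step, assuming stance and emotion labels are already available (e.g., from crowd workers); this is not the paper's implementation:

```python
# Illustrative sketch: organize reactions to a post by (stance, emotion).
# The labels are assumed to come from crowd workers or a classifier.
from collections import defaultdict

reactions = [
    {"text": "This policy would help working families.", "stance": "support", "emotion": "hope"},
    {"text": "The numbers simply do not add up.", "stance": "oppose", "emotion": "skepticism"},
    {"text": "Finally someone is addressing this!", "stance": "support", "emotion": "joy"},
    {"text": "This ignores the real problem.", "stance": "oppose", "emotion": "anger"},
]

grouped = defaultdict(list)
for r in reactions:
    grouped[(r["stance"], r["emotion"])].append(r["text"])

for (stance, emotion), texts in sorted(grouped.items()):
    print(f"[{stance} / {emotion}] {len(texts)} reaction(s)")
```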
Towards Fine-Grained Adaptation of Exploration/Exploitation in Information Retrieval
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025205
Authors: A. Medlar, J. Pyykkö, D. Glowacka
Abstract: Lookup and exploratory search tasks can be distinguished by individuals' information search behaviour. Previous work, however, has treated these search tasks as belonging to homogeneous categories, ignoring differences in information needs between users and even between search sessions for the same user. In this work, we avoid this dichotomy by considering each search task to exist on a spectrum between lookup and exploratory. In doing so, our approach aims to dynamically adapt exploration and exploitation in a manner commensurate with the user's individual requirements for each search session. We present a novel study design together with a regression model for predicting the optimal exploration rate based on simple metrics from the first iteration, such as clicks and reading time, that can be collected without special hardware. We perform model selection based on the data collected from a user study and show that the predictions are consistent with user feedback.
Citations: 19
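The abstract describes a regression model mapping simple first-iteration metrics (clicks, reading time) to an exploration rate. A minimal sketch of that mapping using ordinary least squares; the data, feature choice, and model family here are illustrative assumptions, not the study's actual model selection:

```python
# Illustrative sketch: predict an exploration rate in [0, 1] from
# first-iteration behaviour with ordinary least squares. The training data
# and labels are made up for illustration.
import numpy as np

# Columns: clicks in first iteration, mean reading time (seconds), bias term.
X = np.array([
    [1, 45.0, 1.0],
    [2, 30.0, 1.0],
    [5, 12.0, 1.0],
    [7,  8.0, 1.0],
    [3, 25.0, 1.0],
])
# Target: exploration rate judged optimal for that session (hypothetical labels).
y = np.array([0.2, 0.35, 0.7, 0.85, 0.4])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_exploration_rate(clicks, reading_time):
    raw = coef @ np.array([clicks, reading_time, 1.0])
    return float(np.clip(raw, 0.0, 1.0))

print(round(predict_exploration_rate(4, 20.0), 2))
```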
OptiDwell: Intelligent Adjustment of Dwell Click Time
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025202
Authors: Aanand Nayyar, Utkarsh Dwivedi, Karan Ahuja, Nitendra Rajput, Seema Nagar, K. Dey
Abstract: Gaze-based navigation on digital screens offers hands-free and touchless interaction, which is often useful for providing a hygienic interaction experience in public kiosk scenarios. The effectiveness of such a navigation system depends not only on the accuracy of detecting eye gaze but also on the ability to determine whether a user intends to click a button or is just looking at it. The time for which a user needs to gaze at a particular button before it is considered a click action is called the dwell time. In this paper, we explore intelligent adjustment of dwell times, where mouse click events on the buttons of a given application are emulated with user gaze. A constant dwell time for all buttons and all users may not provide an efficient and intuitive interface. We therefore propose a model that dynamically adjusts the dwell-time values used to emulate mouse click events, exploiting the user's experience with different portions of a given application. The adjustment happens at a per-user, per-button granularity, as a function of (a) the user's prior experience with the given button within the application and (b) the Midas touch characteristics of the given button. We propose OptiDwell, inspired by action-value-method solutions to the multi-armed bandits problem, for dwell click time adaptation. We evaluate OptiDwell using an interactive TV channel browsing application consisting of a mix of text and image buttons, with 10 computer-savvy users generating over 9000 click tasks. We observe significant improvement in user comfort over the sessions, quantified by (a) improved (reduced) dwell times and (b) a reduced number of Midas touches in spite of faster dwell clicks, with as much as a 10-fold reduction in the best case. Our work is useful for creating interfaces with accurate, fast, and comfortable dwell clicks for each interface element (e.g., buttons) and each user.
Citations: 28
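OptiDwell adapts dwell time per user and per button in the spirit of action-value methods for multi-armed bandits. A minimal sketch of one plausible update rule (shorten after clean clicks, lengthen after Midas touches); the actual reward definition, bounds, and step sizes in the paper may differ:

```python
# Illustrative sketch: adapt per-(user, button) dwell times from click outcomes.
# A clean, intended click nudges the dwell time down; a Midas touch
# (unintended click) nudges it back up. Bounds and step size are assumptions.

class DwellAdapter:
    def __init__(self, initial_ms=1000, min_ms=300, max_ms=1500, step=0.1):
        self.initial_ms, self.min_ms, self.max_ms, self.step = initial_ms, min_ms, max_ms, step
        self.dwell = {}  # (user, button) -> current dwell time in ms

    def dwell_time(self, user, button):
        return self.dwell.get((user, button), self.initial_ms)

    def update(self, user, button, midas_touch):
        current = self.dwell_time(user, button)
        # Move toward a shorter dwell on success, a longer one on a Midas touch.
        target = self.max_ms if midas_touch else self.min_ms
        new = current + self.step * (target - current)
        self.dwell[(user, button)] = min(self.max_ms, max(self.min_ms, new))
        return self.dwell[(user, button)]

adapter = DwellAdapter()
for midas in [False, False, True, False]:  # three intended clicks, one Midas touch
    print(round(adapter.update("alice", "channel_up", midas)))  # 930, 867, 930, 867
```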
A Network-Fusion Guided Dashboard Interface for Task-Centric Document Curation
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025177
Authors: Paul Jones, Shivani Sharma, Changsung Moon, N. Samatova
Abstract: Knowledge workers are being exposed to more information than ever before, while also having to work in multi-tasking and collaborative environments. There is an increasing need for interfaces and algorithms that help automatically keep track of documents associated with both individual and team tasks. Previous approaches to the problem of automatically applying task labels to documents have been limited to small feature spaces or have not taken multi-user environments into account. Many different clues to potential task associations are available through user, task, and document similarity metrics, as well as through temporal patterns in individual and team workflows. We present a network-fusion algorithm for automatic task-centric document curation and show how it can guide a recent-work dashboard interface that organizes users' documents and gathers feedback on them. Our approach efficiently computes representations of users, tasks, and documents in a common vector space and can easily take into account many different types of associations through the creation of edges in a multi-layer graph. We demonstrate the effectiveness of this approach using labelled document corpora from three empirical studies with students and intelligence analysts. We also show how to leverage relationships between different entity types to increase classification accuracy by up to 20% over a simpler baseline, with as little as 10% labelled data.
Citations: 3
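The approach represents users, tasks, and documents as nodes in a multi-layer graph, with edges for different association types. A minimal sketch of building such a graph with networkx and scoring a document-task association over two-hop paths; the entity types, relations, and weights are illustrative, and the paper's fusion and common-vector-space embedding steps are not reproduced:

```python
# Illustrative sketch: a multi-layer association graph over users, tasks,
# and documents. Edge types and weights are made up; the network-fusion and
# embedding machinery of the paper is not shown.
import networkx as nx

G = nx.Graph()

# Nodes carry their layer as an attribute.
G.add_node("user:alice", layer="user")
G.add_node("task:quarterly-report", layer="task")
G.add_node("doc:budget.xlsx", layer="document")
G.add_node("doc:draft-report.docx", layer="document")

# Different relation types become typed, weighted edges.
G.add_edge("user:alice", "task:quarterly-report", relation="assigned_to", weight=1.0)
G.add_edge("user:alice", "doc:budget.xlsx", relation="opened", weight=0.6)
G.add_edge("doc:budget.xlsx", "doc:draft-report.docx", relation="content_similarity", weight=0.8)
G.add_edge("task:quarterly-report", "doc:draft-report.docx", relation="labelled", weight=1.0)

def task_score(doc, task):
    """Naive association score: sum of weights over two-hop paths from doc to task."""
    score = 0.0
    for mid in G.neighbors(doc):
        if G.has_edge(mid, task):
            score += G[doc][mid]["weight"] * G[mid][task]["weight"]
    return score

print(round(task_score("doc:budget.xlsx", "task:quarterly-report"), 2))  # 1.4
```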
Eyes Understand the Sketch!: Gaze-Aided Stroke Grouping of Hand-Drawn Flowcharts
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025220
Authors: Beibei Chao, Xiaoyan Zhao, Dapeng Shi, Guihuan Feng, Bin Luo
Abstract: Stroke grouping in sketch recognition is both difficult and time-consuming. Our preliminary experiment indicates that when people draw flowcharts, their gaze focuses on non-arrow areas, providing a spatial cue for stroke grouping. We therefore present a novel stroke grouping method aided by gaze information. From gaze data collected during the natural drawing process, we generate hotspot areas that serve as position references for semantic symbols. Strokes are first roughly grouped by the hotspot areas, which efficiently decreases the search space. An experiment on a dataset of 54 flowcharts shows that our method greatly improves the time efficiency of stroke grouping and that there is much potential for introducing eye-gaze data into sketch recognition.
Citations: 2
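The core idea in the abstract is that gaze fixations collected during drawing cluster around symbol locations, and these hotspot centers can pre-group strokes. A minimal sketch of that two-step idea with a simple distance-threshold clustering; the radius and the exact grouping procedure in the paper are not specified here:

```python
# Illustrative sketch: cluster gaze fixations into hotspots, then assign
# each stroke to its nearest hotspot. The radius and the downstream
# recognition step are assumptions, not the paper's exact method.
import math

def cluster_fixations(fixations, radius=80.0):
    """Greedy clustering: a fixation joins the first hotspot within 'radius', else starts a new one."""
    hotspots = []  # each hotspot is a list of (x, y) fixation points
    for point in fixations:
        for spot in hotspots:
            cx = sum(p[0] for p in spot) / len(spot)
            cy = sum(p[1] for p in spot) / len(spot)
            if math.dist(point, (cx, cy)) <= radius:
                spot.append(point)
                break
        else:
            hotspots.append([point])
    return [(sum(p[0] for p in s) / len(s), sum(p[1] for p in s) / len(s)) for s in hotspots]

def group_strokes(strokes, centers):
    """Assign each stroke (a list of points) to the hotspot nearest its centroid."""
    groups = {i: [] for i in range(len(centers))}
    for stroke in strokes:
        sx = sum(p[0] for p in stroke) / len(stroke)
        sy = sum(p[1] for p in stroke) / len(stroke)
        nearest = min(range(len(centers)), key=lambda i: math.dist((sx, sy), centers[i]))
        groups[nearest].append(stroke)
    return groups

fixations = [(100, 100), (110, 95), (300, 220), (305, 230)]
strokes = [[(90, 90), (120, 110)], [(290, 210), (315, 240)]]
print(group_strokes(strokes, cluster_fixations(fixations)))
```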
Don't Just Swipe Left, Tell Me Why: Enhancing Gesture-based Feedback with Reason Bins
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025212
Authors: J. F. Beltran, Ziqi Huang, A. Abouzeid, Arnab Nandi
Abstract: Despite several advances in information retrieval systems and user interfaces, the specification of queries over text-based document collections remains a challenging problem. Query specification with keywords is a popular solution. However, given the widespread adoption of gesture-driven interfaces such as multi-touch technologies in smartphones and tablets, the lack of a physical keyboard makes query specification with keywords inconvenient. We present BinGO, a novel gestural approach to querying text databases that allows users to refine their queries by swiping to "like" or "dislike" candidate documents, as well as to express the reasons they like or dislike a document by swiping through automatically generated "reason bins". Such reasons refine a user's query with additional keywords. We present an online and efficient bin-generation algorithm that presents reason bins at gesture articulation. We motivate and describe BinGO's unique interface design choices. Based on our analysis and user studies, we demonstrate that query specification by swiping through reason bins is easy and expressive.
Citations: 9
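BinGO refines a keyword query from swipe feedback: a swipe on a document together with a reason bin adds that bin's keywords as positive or negative query terms. A minimal sketch of that refinement step; the bin-generation algorithm and ranking are not shown, and the weighting scheme is an assumption:

```python
# Illustrative sketch: refine a keyword query from swipe feedback on reason
# bins. Bin contents and the weighting are assumptions; BinGO's online
# bin-generation algorithm is not reproduced here.

def refine_query(query_terms, swipes):
    """Each swipe is (direction, bin_keywords); 'like' promotes its keywords, 'dislike' demotes them."""
    weights = {term: 1.0 for term in query_terms}
    for direction, bin_keywords in swipes:
        delta = 0.5 if direction == "like" else -0.5
        for keyword in bin_keywords:
            weights[keyword] = weights.get(keyword, 0.0) + delta
    positive = [t for t, w in weights.items() if w > 0]
    negative = [t for t, w in weights.items() if w < 0]
    return positive, negative

query = ["hybrid", "car", "review"]
swipes = [("like", ["fuel economy", "sedan"]), ("dislike", ["luxury", "sedan"])]
print(refine_query(query, swipes))
# (['hybrid', 'car', 'review', 'fuel economy'], ['luxury'])
```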