Proceedings of the 24th International Conference on Intelligent User Interfaces — Latest Publications

Analyzing user's task-driven interaction in mixed reality
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302286
S. Kleanthous, Elena Matsi
{"title":"Analyzing user's task-driven interaction in mixed reality","authors":"S. Kleanthous, Elena Matsi","doi":"10.1145/3301275.3302286","DOIUrl":"https://doi.org/10.1145/3301275.3302286","url":null,"abstract":"Mixed reality (MR) provides exciting interaction approaches in several applications. The user experience of interacting in these visually rich environments depends highly on the way the user perceives, processes, and comprehends visual information. In this work we are investigating the differences between Field Dependent - Field Independent users towards their interaction behavior in a MR environment when they were asked to perform a specific task. A study was conducted using Microsoft HoloLens device in which participants interacted with a popular HoloLens application, modified by the authors to log user interaction data in real time. Analysis of the results demonstrates the differences in the visual processing of information, especially in visually complex environments and the impact on the user's interaction behavior.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"100 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132270708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Explainability scenarios: towards scenario-based XAI design
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302317
Christine T. Wolf
{"title":"Explainability scenarios: towards scenario-based XAI design","authors":"Christine T. Wolf","doi":"10.1145/3301275.3302317","DOIUrl":"https://doi.org/10.1145/3301275.3302317","url":null,"abstract":"Integral to the adoption and uptake of AI systems in real-world settings is the ability for people to make sense of and evaluate such systems, a growing area of development and design efforts known as XAI (Explainable AI). Recent work has advanced the state of the art, yet a key challenge remains in understanding unique requirements that might arise when XAI systems are deployed into complex settings of use. In helping envision such requirements, this paper turns to scenario-based design, a method that anticipates and leverages scenarios of possible use early on in system development. To demonstrate the value of the scenario-based design method to XAI design, this paper presents a case study of aging-in-place monitoring. Introducing the concept of \"explainability scenarios\" as resources in XAI design, this paper sets out a forward-facing agenda for further attention to the emergent requirements of explainability-in-use.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127048149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 86
Background perception and comprehension of symbols conveyed through vibrotactile wearable displays
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302282
Granit Luzhnica, Eduardo Veas
{"title":"Background perception and comprehension of symbols conveyed through vibrotactile wearable displays","authors":"Granit Luzhnica, Eduardo Veas","doi":"10.1145/3301275.3302282","DOIUrl":"https://doi.org/10.1145/3301275.3302282","url":null,"abstract":"Previous research has demonstrated the feasibility of conveying vibrotactile encoded information efficiently using wearable devices. Users can understand vibrotactile encoded symbols and complex messages combining such symbols. Such wearable devices can find applicability in many multitasking use cases. Nevertheless, for multitasking, it would be necessary for the perception and comprehension of vibrotactile information to be less attention demanding and not interfere with other parallel tasks. We present a user study which investigates whether high speed vibrotactile encoded messages can be perceived in the background while performing other concurrent attention-demanding primary tasks. The vibrotactile messages used in the study were limited to symbols representing letters of English Alphabet. We observed that users could very accurately comprehend vibrotactile such encoded messages in the background and other parallel tasks did not affect users performance. Additionally, the comprehension of such messages did also not affect the performance of the concurrent primary task as well. 
Our results promote the use of vibrotactile information transmission to facilitate multitasking.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129034508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
When people and algorithms meet: user-reported problems in intelligent everyday applications
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302262
Malin Eiband, S. Völkel, Daniel Buschek, Sophia Cook, H. Hussmann
{"title":"When people and algorithms meet: user-reported problems in intelligent everyday applications","authors":"Malin Eiband, S. Völkel, Daniel Buschek, Sophia Cook, H. Hussmann","doi":"10.1145/3301275.3302262","DOIUrl":"https://doi.org/10.1145/3301275.3302262","url":null,"abstract":"The complex nature of intelligent systems motivates work on supporting users during interaction, for example through explanations. However, there is yet little empirical evidence on specific problems users face in such systems in everyday use. This paper investigates such problems as reported by users: We analysed 35,448 reviews of three apps on the Google Play Store (Facebook, Netflix and Google Maps) with sentiment analysis and topic modelling to reveal problems during interaction that can be attributed to the apps' algorithmic decision-making. We enriched this data with users' coping and support strategies through a follow-up online survey (N=286). In particular, we found problems and strategies related to content, algorithm, user choice, and feedback. We discuss corresponding implications for designing user support, highlighting the importance of user control and explanations of output, not processes. Our work thus contributes empirical evidence to facilitate understanding of users' everyday problems with intelligent systems.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121785336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
Walking with adaptive augmented reality workspaces: design and usage patterns
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302278
W. Lages, D. Bowman
{"title":"Walking with adaptive augmented reality workspaces: design and usage patterns","authors":"W. Lages, D. Bowman","doi":"10.1145/3301275.3302278","DOIUrl":"https://doi.org/10.1145/3301275.3302278","url":null,"abstract":"Mobile augmented reality may eventually replace our smartphones as the primary way of accessing information on the go. However, current interfaces provide little support to walking and to the variety of actions we perform in the real world. To achieve its full potential, augmented reality interfaces must support the fluid way we move and interact in the physical world. We explored how different adaptation strategies can contribute towards this goal. We evaluated design alternatives through contextual studies and identified the key interaction patterns that interfaces for walking should support. We also identified desirable properties of adaptation-based interface techniques, which can be used to guide the design of the next-generation walking-centered augmented reality workspaces.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114975269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
SAM: a modular framework for self-adapting web menus
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-01-24 DOI: 10.1145/3301275.3302314
Camille Gobert, Kashyap Todi, G. Bailly, Antti Oulasvirta
{"title":"SAM: a modular framework for self-adapting web menus","authors":"Camille Gobert, Kashyap Todi, G. Bailly, Antti Oulasvirta","doi":"10.1145/3301275.3302314","DOIUrl":"https://doi.org/10.1145/3301275.3302314","url":null,"abstract":"This paper presents SAM, a modular and extensible JavaScript framework for self-adapting menus on webpages. SAM allows control of two elementary aspects for adapting web menus: (1) the target policy, which assigns scores to menu items for adaptation, and (2) the adaptation style, which specifies how they are adapted on display. By decoupling them, SAM enables the exploration of different combinations independently. Several policies from literature are readily implemented, and paired with adaptation styles such as reordering and highlighting. The process---including user data logging---is local, offering privacy benefits and eliminating the need for server-side modifications. Researchers can use SAM to experiment adaptation policies and styles, and benchmark techniques in an ecological setting with real webpages. Practitioners can make websites self-adapting, and end-users can dynamically personalise typically static web menus.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129045422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
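The policy/style decoupling that the SAM abstract describes can be illustrated with a short sketch. This is not SAM's actual API: the type names, the frequency-based policy, and the reordering style below are hypothetical stand-ins for the two concepts the paper separates.

```typescript
interface MenuItem { label: string; clicks: number; }

// (1) Target policy: assigns an adaptation score to each menu item.
type TargetPolicy = (items: MenuItem[]) => Map<string, number>;

// (2) Adaptation style: decides how scored items are shown on display.
type AdaptationStyle = (items: MenuItem[], scores: Map<string, number>) => MenuItem[];

// Example policy: most-frequently-clicked items score highest.
const frequencyPolicy: TargetPolicy = (items) =>
  new Map(items.map((i): [string, number] => [i.label, i.clicks]));

// Example style: reorder items by descending score (a highlighting
// style could instead mark high-scoring items without moving them).
const reorderStyle: AdaptationStyle = (items, scores) =>
  [...items].sort((a, b) => (scores.get(b.label) ?? 0) - (scores.get(a.label) ?? 0));

// Because policy and style are decoupled, any policy can be paired
// with any style, which is the combination space SAM exposes.
function adaptMenu(items: MenuItem[], policy: TargetPolicy, style: AdaptationStyle): MenuItem[] {
  return style(items, policy(items));
}

const menu: MenuItem[] = [
  { label: "File", clicks: 3 },
  { label: "Edit", clicks: 9 },
  { label: "View", clicks: 1 },
];
console.log(adaptMenu(menu, frequencyPolicy, reorderStyle).map((i) => i.label));
// → [ 'Edit', 'File', 'View' ]
```

Swapping `reorderStyle` for a highlighting style, or `frequencyPolicy` for a recency-based one, requires changing only one argument, which mirrors the independent exploration the paper argues for.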
Explaining models: an empirical study of how explanations impact fairness judgment
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-01-23 DOI: 10.1145/3301275.3302310
Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Casey Dugan
{"title":"Explaining models: an empirical study of how explanations impact fairness judgment","authors":"Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Casey Dugan","doi":"10.1145/3301275.3302310","DOIUrl":"https://doi.org/10.1145/3301275.3302310","url":null,"abstract":"Ensuring fairness of machine learning systems is a human-in-the-loop process. It relies on developers, users, and the general public to identify fairness problems and make improvements. To facilitate the process we need effective, unbiased, and user-friendly explanations that people can confidently rely on. Towards that end, we conducted an empirical study with four types of programmatically generated explanations to understand how they impact people's fairness judgments of ML systems. With an experiment involving more than 160 Mechanical Turk workers, we show that: 1) Certain explanations are considered inherently less fair, while others can enhance people's confidence in the fairness of the algorithm; 2) Different fairness problems-such as model-wide fairness issues versus case-specific fairness discrepancies-may be more effectively exposed through different styles of explanation; 3) Individual differences, including prior positions and judgment criteria of algorithmic fairness, impact how people react to different styles of explanation. 
We conclude with a discussion on providing personalized and adaptive explanations to support fairness judgments of ML systems.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122066089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
Automated rationale generation: a technique for explainable AI and its effects on human perceptions
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-01-11 DOI: 10.1145/3301275.3302316
Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl
{"title":"Automated rationale generation: a technique for explainable AI and its effects on human perceptions","authors":"Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl","doi":"10.1145/3301275.3302316","DOIUrl":"https://doi.org/10.1145/3301275.3302316","url":null,"abstract":"Automated rationale generation is an approach for real-time explanation generation whereby a computational model learns to translate an autonomous agent's internal state and action data representations into natural language. Training on human explanation data can enable agents to learn to generate human-like explanations for their behavior. In this paper, using the context of an agent that plays Frogger, we describe (a) how to collect a corpus of explanations, (b) how to train a neural rationale generator to produce different styles of rationales, and (c) how people perceive these rationales. We conducted two user studies. The first study establishes the plausibility of each type of generated rationale and situates their user perceptions along the dimensions of confidence, humanlike-ness, adequate justification, and understandability. The second study further explores user preferences between the generated rationales with regard to confidence in the autonomous agent, communicating failure and unexpected behavior. Overall, we find alignment between the intended differences in features of the generated rationales and the perceived differences by users. 
Moreover, context permitting, participants preferred detailed rationales to form a stable mental model of the agent's behavior.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131183870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 178
Smell Pittsburgh: community-empowered mobile smell reporting system
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2018-10-25 DOI: 10.1145/3301275.3302293
Yen-Chia Hsu, Jennifer L. Cross, P. Dille, Michael Tasota, Beatrice Dias, Randy Sargent, Ting-Hao 'Kenneth' Huang, I. Nourbakhsh
{"title":"Smell Pittsburgh: community-empowered mobile smell reporting system","authors":"Yen-Chia Hsu, Jennifer L. Cross, P. Dille, Michael Tasota, Beatrice Dias, Randy Sargent, Ting-Hao 'Kenneth' Huang, I. Nourbakhsh","doi":"10.1145/3301275.3302293","DOIUrl":"https://doi.org/10.1145/3301275.3302293","url":null,"abstract":"Urban air pollution has been linked to various human health considerations, including cardiopulmonary diseases. Communities who suffer from poor air quality often rely on experts to identify pollution sources due to the lack of accessible tools. Taking this into account, we developed Smell Pittsburgh, a system that enables community members to report odors and track where these odors are frequently concentrated. All smell report data are publicly accessible online. These reports are also sent to the local health department and visualized on a map along with air quality data from monitoring stations. This visualization provides a comprehensive overview of the local pollution landscape. Additionally, with these reports and air quality data, we developed a model to predict upcoming smell events and send push notifications to inform communities. 
Our evaluation of this system demonstrates that engaging residents in documenting their experiences with pollution odors can help identify local air pollution patterns, and can empower communities to advocate for better air quality.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134159871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
What can AI do for me?: evaluating machine learning interpretations in cooperative play
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2018-10-23 DOI: 10.1145/3301275.3302265
Shi Feng, Jordan L. Boyd-Graber
{"title":"What can AI do for me?: evaluating machine learning interpretations in cooperative play","authors":"Shi Feng, Jordan L. Boyd-Graber","doi":"10.1145/3301275.3302265","DOIUrl":"https://doi.org/10.1145/3301275.3302265","url":null,"abstract":"Machine learning is an important tool for decision making, but its ethical and responsible application requires rigorous vetting of its interpretability and utility: an understudied problem, particularly for natural language processing models. We propose an evaluation of interpretation on a real task with real human users, where the effectiveness of interpretation is measured by how much it improves human performance. We design a grounded, realistic human-computer cooperative setting using a question answering task, Quizbowl. We recruit both trivia experts and novices to play this game with computer as their teammate, who communicates its prediction via three different interpretations. We also provide design guidance for natural language processing human-in-the-loop settings.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"9 14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117011895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 106