Proceedings of the 24th International Conference on Intelligent User Interfaces: Latest Publications

Peripheral vision: a new killer app for smart glasses
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302263
Ishan Chaturvedi, Farshid Hassani Bijarbooneh, Tristan Braud, Pan Hui
Abstract: Most smart glasses have a small and limited field of view, and the head-mounted display often spans both the human central and peripheral vision. In this paper, we exploit this characteristic to display information in the user's peripheral vision. We introduce a mobile peripheral vision model that can be used on any smart glasses with a head-mounted display, without any additional hardware. The model taps into the otherwise blocked peripheral vision of the user and simplifies multitasking when using smart glasses. To demonstrate its potential applications, we implement an application for indoor and outdoor navigation. We conducted an experiment with 20 people, on both a smartphone and smart glasses, to evaluate the model in indoor and outdoor conditions. Users reported spending at least 50% less time looking at the screen when exploiting their peripheral vision with the smart glasses, and 90% of them agreed that using the model for navigation is more practical than standard navigation applications.
Citations: 32
Induction of an active attitude by short speech reaction time toward interaction for decision-making with multiple agents
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302330
Y. Ohmoto, So Kumano, T. Nishida
Abstract: Interactive decision-making helps us turn ambiguous desires into concrete ones through interaction with others. In human-agent interaction, however, agents are often regarded not as experienced consultants but merely as human-centered interfaces that provide information. We aimed to induce an active human attitude toward decision-making interactions with agents, so that the agents are treated as reliable consultants, by controlling the agents' speech reaction time (SRT). We conducted an experiment to investigate whether SRT could influence participants' attitudes, using two kinds of agents: one with no SRT (no-SRT) and one with an SRT of two seconds (2s-SRT). We found that the no-SRT agents kept participants' own speech reaction times short, even during a decision-making task in which participants needed time for careful consideration. In addition, an analysis of the number of proposed categories and of participants' behavior suggests that participants took an active attitude toward interaction with the no-SRT agents.
Citations: 0
What data should I protect?: recommender and planning support for data security analysts
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302294
Tianyi Li, G. Convertino, Ranjeet Kumar Tayi, Shima Kazerooni
Abstract: Major breaches of sensitive company data, such as Facebook's 50 million user accounts in 2018 or Equifax's 143 million user accounts in 2017, are showing the limitations of reactive data security technologies. Companies and government organizations are turning to proactive data security technologies that secure sensitive data at the source. However, data security analysts still face two fundamental challenges in data protection decisions: 1) information overload from the growing number of data repositories and protection techniques to consider, and 2) optimizing protection plans given the organization's current goals and available resources. In this work, we propose an intelligent user interface for security analysts that recommends what data to protect, visualizes simulated protection impact, and helps build protection plans. In a domain with limited access to expert users and practices, we elicited user requirements from security analysts in industry and modeled data risks based on architectural and conceptual attributes. Our preliminary evaluation suggests that the design improves understanding and trust of the recommended protections and helps convert risk information into protection plans.
Citations: 14
Prediction of music pairwise preferences from facial expressions
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302266
M. Tkalcic, Nima Maleki, Matevž Pesek, Mehdi Elahi, F. Ricci, M. Marolt
Abstract: Users of a recommender system may be asked to express their preferences either by evaluating individual items (e.g., with a rating) or by comparing pairs of items. In this work we focus on the acquisition of pairwise preferences in the music domain. Explicitly asking the user which of two listened tracks they prefer requires some effort, so we developed a novel approach that automatically extracts these preferences by analyzing users' facial expressions while they listen to the compared tracks. We trained a predictor that infers a user's pairwise preferences from features extracted from these data, and show that it outperforms a commonly used baseline that infers pairwise preferences from how long the user listened to each track. Furthermore, we show that the accuracy of the proposed method differs between users with different personalities, and we adapted the trained model accordingly. Our work shows that this low-effort preference elicitation approach, although it requires access to information that may raise privacy issues (facial expressions), can achieve good prediction accuracy for pairwise music preferences.
Citations: 15
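The pairwise setup described in this abstract can be sketched as a classifier over the difference of two per-track feature vectors. The following is a minimal illustration only: the logistic-regression model and all names are assumptions, not the predictor or features actually used by Tkalcic et al.

```python
import numpy as np

class PairwisePreferencePredictor:
    """Toy pairwise-preference model: logistic regression on the difference
    of two per-track feature vectors (e.g., aggregated facial-expression
    features), trained with plain gradient descent. Illustrative only."""

    def __init__(self, n_features, lr=0.1, epochs=200):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr
        self.epochs = epochs

    def _sigmoid(self, z):
        return 1.0 / (1.0 + np.exp(-z))

    def fit(self, X_a, X_b, prefers_a):
        # Represent each comparison as the feature difference of its two tracks.
        X = X_a - X_b
        y = np.asarray(prefers_a, dtype=float)  # 1.0 if track A is preferred
        for _ in range(self.epochs):
            p = self._sigmoid(X @ self.w + self.b)
            grad = p - y                          # gradient of log-loss
            self.w -= self.lr * X.T @ grad / len(y)
            self.b -= self.lr * grad.mean()

    def predict_prefers_a(self, x_a, x_b):
        """True if the model predicts track A is preferred over track B."""
        return self._sigmoid((x_a - x_b) @ self.w + self.b) > 0.5
```

The same difference-of-features framing would accommodate the listening-duration baseline mentioned in the abstract by using duration as the single feature.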
I can do better than your AI: expertise and explanations
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302308
J. Schaffer, J. O'Donovan, James R. Michaelis, A. Raglin, Tobias Höllerer
Abstract: Intelligent assistants, such as navigation, recommender, and expert systems, are most helpful in situations where users lack domain knowledge. Despite this, recent research in cognitive psychology has revealed that lower-skilled individuals may maintain a sense of illusory superiority, suggesting that the users who most need advice may be the least likely to defer judgment. Explanation interfaces, a method for persuading users to take a system's advice, are widely thought to be the solution for instilling trust, but do their effects hold for self-assured users? To address this knowledge gap, we conducted a quantitative study (N=529) in which participants played a binary decision-making game with help from an intelligent assistant. Participants were profiled in terms of both actual (measured) expertise and reported familiarity with the task. We manipulated the presence of explanations, the level of automation, and the number of errors made by the assistant while observing changes in users' acceptance of advice. An analysis of cognitive metrics led to three findings for research in intelligent assistants: 1) higher reported task familiarity predicted more reported trust but less adherence, 2) explanations only swayed people who reported very low task familiarity, and 3) showing explanations to people who reported higher task familiarity led to automation bias.
Citations: 75
Assisting group activity analysis through hand detection and identification in multiple egocentric videos
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302297
Nathawan Charoenkulvanich, Rie Kamikubo, Ryo Yonetani, Yoichi Sato
Abstract: Research in group activity analysis has focused on monitoring work and evaluating group and individual performance, with results that can inform improvements in future group interactions. As a new means of examining individual and joint actions in a group activity, our work investigates the potential of detecting and disambiguating each person's hands in first-person (egocentric) videos. Building on recent developments in automated hand-region extraction from video, we develop a multiple-egocentric-video browsing interface that gives easy access to frames of 1) individual action, when only the viewer's hands are detected; 2) joint action, when hands of multiple people are detected together; and 3) the viewer observing others' actions, when only the others' hands are detected. Our evaluation explores the effectiveness of the interface and the proposed hand-related features, which help analysts perceive actions of interest in complex videos involving the co-occurring behaviors of multiple people.
Citations: 2
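The three-way frame categorization this abstract describes reduces to a small piece of logic once hand detections are labeled with their owners. A minimal sketch, assuming a hypothetical representation in which each frame carries the set of people whose hands were detected (the paper's actual data structures are not specified here):

```python
def categorize_frame(detected_hand_owners, viewer):
    """Classify one egocentric frame by whose hands appear in it.
    detected_hand_owners: identifiers of people whose hands were detected.
    viewer: identifier of the camera wearer."""
    owners = set(detected_hand_owners)
    if not owners:
        return "no-hands"
    if owners == {viewer}:
        return "individual-action"   # only the viewer's own hands
    if viewer in owners:
        return "joint-action"        # viewer's and others' hands together
    return "observing-others"        # only other people's hands

def index_frames(frames, viewer):
    """Build a category -> frame-number index for quick browsing,
    as a browsing interface like the one above might do."""
    index = {}
    for i, owners in enumerate(frames):
        index.setdefault(categorize_frame(owners, viewer), []).append(i)
    return index
```

An interface could then jump directly to, say, all "joint-action" frames across the synchronized egocentric videos.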
Progressive disclosure: empirically motivated approaches to designing effective transparency
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302322
Aaron Springer, S. Whittaker
Abstract: As we increasingly delegate important decisions to intelligent systems, it is essential that users understand how algorithmic decisions are made. Prior work has often taken a technocentric approach to transparency. In contrast, we explore empirical user-centric methods to better understand user reactions to transparent systems. We assess user reactions to transparency in two studies. In Study 1, users anticipated that a more transparent system would perform better, but retracted this evaluation after experience with the system. Qualitative data suggest this arose because transparency is distracting and undermines simple heuristics users form about system operation. Study 2 explored these effects in depth, suggesting that users may benefit from initially simplified feedback that hides potential system errors and assists users in building working heuristics about system operation. We use these findings to motivate new progressive disclosure principles for transparency in intelligent systems.
Citations: 62
The effect of explanations and algorithmic accuracy on visual recommender systems of artistic images
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302274
Vicente Dominguez, Pablo Messina, Ivania Donoso-Guzmán, Denis Parra
Abstract: Very few works address explaining content-based recommendations of images in the artistic domain, and current works do not offer a perspective on the many variables involved in users' perception of aspects of the system such as domain knowledge, relevance, explainability, and trust. In this paper, we aim to fill this gap by studying three interfaces, with different levels of explainability, for artistic image recommendation. Our experiments with N=121 users confirm that explanations of recommendations in the image domain are useful and increase user satisfaction and the perception of explainability and relevance. Furthermore, our results show that the observed effects also depend on the underlying recommendation algorithm. We tested two algorithms: Deep Neural Networks (DNN), with high accuracy, and Attractiveness Visual Features (AVF), with high transparency but lower accuracy. Our results indicate that algorithms should not be studied in isolation but in conjunction with interfaces, since both play a significant role in the perception of explainability and trust for image recommendation. Finally, using the framework by Knijnenburg et al., we provide a comprehensive model that synthesizes the effects between the different variables involved in the user experience with explainable visual recommender systems of artistic images.
Citations: 37
Do I trust my machine teammate?: an investigation from perception to decision
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302277
Kun Yu, S. Berkovsky, R. Taib, Jianlong Zhou, Fang Chen
Abstract: In the context of human-machine collaboration, understanding the reason behind each human decision is critical for interpreting the performance of the human-machine team. Via an experimental study of a system with varied levels of accuracy, we describe how human trust interplays with system performance, human perception, and decisions. We find that humans are able to perceive the performance of automated systems and of themselves, and that they adjust their trust levels according to the system's accuracy. A system accuracy of 70% appears to mark the threshold between increasing and decreasing human trust and system usage. We also show that trust derives from a series of user decisions rather than from a single one, and that it relates to users' perceptions. We propose a general framework depicting how trust and perception affect human decision making, which can guide future human-machine collaboration design.
Citations: 49
Popup
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302305
Jean Y. Song, Stephan J. Lemmer, M. Liu, Shiyan Yan, Juho Kim, Jason J. Corso, Walter S. Lasecki
Abstract: Collecting a sufficient amount of 3D training data for autonomous vehicles to handle rare but critical traffic events (e.g., collisions) may take decades of deployment. Abundant video of such events from municipal traffic cameras and video-sharing sites (e.g., YouTube) could provide a potential alternative, but generating realistic training data in the form of 3D video reconstructions is a challenging task beyond the current capabilities of computer vision. Crowdsourcing the annotation of the necessary information could bridge this gap, but the level of accuracy required to obtain usable reconstructions makes the task nearly impossible for non-experts. In this paper, we propose a novel hybrid intelligence method that combines annotations from workers viewing different instances (video frames) of the same target (3D object) and uses particle filtering to aggregate their responses. Our approach leverages temporal dependencies between video frames, enabling higher quality through more aggressive filtering. It reduces the relative error of position estimation by 33% compared to a state-of-the-art baseline, and it allows workers to skip (self-filter) challenging annotations, reducing the total annotation time for hard-to-annotate frames by 16%. The approach provides a generalizable means of aggregating more accurate crowd responses in settings where annotation is especially challenging or error-prone.
Citations: 18
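The particle-filtering aggregation this abstract mentions can be illustrated in one dimension: each frame's noisy worker annotations are treated as observations of a moving object, and a particle filter fuses them across frames. This is a toy sketch under simplifying assumptions (a random-walk motion model and Gaussian annotation noise), not the paper's actual implementation.

```python
import numpy as np

def particle_filter_track(annotations, n_particles=1000,
                          motion_std=2.0, obs_std=1.0, seed=0):
    """Estimate a 1-D object position per frame from noisy crowd annotations.

    annotations: one list of worker position estimates per frame.
    Returns the posterior-mean position estimate for each frame.
    The random-walk motion_std must be wide enough to cover the object's
    true frame-to-frame motion (an illustrative assumption here)."""
    rng = np.random.default_rng(seed)
    # Initialize particles around the first frame's annotation mean.
    particles = rng.normal(np.mean(annotations[0]), obs_std, n_particles)
    estimates = []
    for frame_annos in annotations:
        # Predict: particles drift under a random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, n_particles)
        # Update: weight particles by the likelihood of every annotation.
        log_w = np.zeros(n_particles)
        for z in frame_annos:
            log_w += -0.5 * ((z - particles) / obs_std) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Resample to avoid weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return estimates
```

Because the weighting step multiplies the likelihoods of all annotations for a frame, several noisy workers jointly pin down the position more tightly than any single one, which is the intuition behind the aggregation.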