Proceedings of the 24th International Conference on Intelligent User Interfaces: Latest Publications

ShopEye
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302299
Qian Zhang, Dong Wang, Run Zhao, Yufeng Deng, Yinggang Yu
Smart retail stores open new possibilities for enabling a variety of physical analytics, such as users' shopping trajectories and preferences for certain items. This paper aims to excavate three kinds of relations in physical stores, i.e., user-item, user-user, and item-item, which provide abundant information for enhancing users' shopping experiences and boosting retailers' sales. We present ShopEye, a hybrid RFID and smartwatch system to delve into these relations in an implicit and non-intrusive manner. The intuition is that inertial sensors embedded in smartwatches and RFID tags attached to items can capture user behaviors and item motions, respectively. ShopEye first pairs users with corresponding items according to correlations between inertial signals and RFID signals, and then combines these pairs with users' motion behaviors to further profile user-user and item-item relations. We have tested the system extensively in our lab environment, which mimics a real retail store. Experimental results demonstrate the effectiveness and robustness of ShopEye in excavating these relations.
Citations: 1
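The abstract does not publish ShopEye's pairing algorithm, but its core idea, matching each user's wrist-motion stream to the RFID signal stream of the item they handle, can be sketched as a correlation search. The use of Pearson correlation over windowed magnitude series, and all function names below, are illustrative assumptions, not the authors' method.

```python
# Hedged sketch: pair each user with the item whose RFID signal
# fluctuation best correlates with the user's wrist motion.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy)

def pair_users_items(user_motion, item_signal):
    """Greedy user-to-item pairing by maximum signal correlation.

    user_motion: {user_id: [accel magnitude per time window]}
    item_signal: {item_id: [RSSI variation per time window]}
    """
    return {uid: max(item_signal,
                     key=lambda iid: pearson(motion, item_signal[iid]))
            for uid, motion in user_motion.items()}

# Toy example: user "u1" moves in sync with item "mug", not "hat".
users = {"u1": [0.1, 0.9, 0.8, 0.2, 0.7]}
items = {"mug": [0.2, 1.0, 0.9, 0.1, 0.8],
         "hat": [0.9, 0.1, 0.2, 0.8, 0.1]}
print(pair_users_items(users, items))  # {'u1': 'mug'}
```

A production system would additionally have to align the two streams in time and handle items touched by several users at once, which is where the paper's user-user and item-item profiling comes in.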
Personalized explanations for hybrid recommender systems
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302306
Pigi Kouki, J. Schaffer, J. Pujara, J. O'Donovan, L. Getoor
Recommender systems have become pervasive on the web, shaping the way users see information and thus the decisions they make. As these systems get more complex, there is a growing need for transparency. In this paper, we study the problem of generating and visualizing personalized explanations for hybrid recommender systems, which incorporate many different data sources. We build upon a hybrid probabilistic graphical model and develop an approach to generate real-time recommendations along with personalized explanations. To study the benefits of explanations for hybrid recommender systems, we conduct a crowd-sourced user study where our system generates personalized recommendations and explanations for real users of the last.fm music platform. We experiment with 1) different explanation styles (e.g., user-based, item-based), 2) manipulating the number of explanation styles presented, and 3) manipulating the presentation format (e.g., textual vs. visual). We apply a mixed-model statistical analysis that treats user personality traits as a control variable and demonstrate the usefulness of our approach in creating personalized hybrid explanations of different styles, numbers, and formats.
Citations: 119
Photo sleuth: combining human expertise and face recognition to identify historical portraits
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302301
V. Mohanty, D. Thames, Sneha Mehta, Kurt Luther
Identifying people in historical photographs is important for preserving material culture, correcting the historical record, and creating economic value, but it is also a complex and challenging task. In this paper, we focus on identifying portraits of soldiers who participated in the American Civil War (1861-65), the first widely-photographed conflict. Many thousands of these portraits survive, but only 10--20% are identified. We created Photo Sleuth, a web-based platform that combines crowdsourced human expertise and automated face recognition to support Civil War portrait identification. Our mixed-methods evaluation of Photo Sleuth one month after its public launch showed that it helped users successfully identify unknown portraits and provided a sustainable model for volunteer contribution. We also discuss implications for crowd-AI interaction and person identification pipelines.
Citations: 20
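The abstract's description of a crowd-AI identification pipeline suggests a two-stage search: narrow the candidate pool with expert-supplied metadata (e.g., uniform details), then rank the survivors by face similarity. The sketch below assumes precomputed face embeddings compared by cosine similarity; the tag vocabulary, names, and data are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of a two-stage person-identification pipeline in the
# spirit of Photo Sleuth: metadata filter first, face ranking second.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def identify(query_embedding, query_tags, reference_db, top_k=3):
    """Return top-k candidate names.

    reference_db: list of {'name', 'embedding', 'tags'} records;
    a record survives the filter only if it has every query tag.
    """
    candidates = [r for r in reference_db if query_tags <= r["tags"]]
    candidates.sort(key=lambda r: cosine(query_embedding, r["embedding"]),
                    reverse=True)
    return [r["name"] for r in candidates[:top_k]]

db = [
    {"name": "A. Smith", "embedding": [0.9, 0.1], "tags": {"infantry", "union"}},
    {"name": "B. Jones", "embedding": [0.1, 0.9], "tags": {"cavalry", "union"}},
    {"name": "C. Brown", "embedding": [0.8, 0.2], "tags": {"infantry", "confederate"}},
]
print(identify([1.0, 0.0], {"infantry"}, db))  # ['A. Smith', 'C. Brown']
```

The design point this illustrates is the division of labor the paper studies: cheap, high-precision human cues prune the search space so the (noisier) face recognizer only ranks a short list.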
StoryPrint
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302302
K. Watson, Samuel S. Sohn, Sasha Schriber, Markus H. Gross, C. Muñiz, Mubbasir Kapadia
In this paper, we propose StoryPrint, an interactive visualization of creative storytelling that facilitates individual and comparative structural analyses. This visualization method is intended for script-based media, which has suitable metadata. The pre-visualization process involves parsing the script into different metadata categories and analyzing the sentiment on a character and scene basis. For each scene, the setting, character presence, character prominence, and character emotion of a film are represented as a StoryPrint. The visualization is presented as a radial diagram of concentric rings wrapped around a circular time axis. A user then has the ability to toggle a difference overlay to assist in the cross-comparison of two different scene inputs.
Citations: 8
Towards rapid interactive machine learning: evaluating tradeoffs of classification without representation
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302280
Dustin L. Arendt, Emily Saldanha, Ryan Wesslen, Svitlana Volkova, Wenwen Dou
Our contribution is the design and evaluation of an interactive machine learning interface that rapidly provides the user with model feedback after every interaction. To address visual scalability, this interface communicates with the user via a "tip of the iceberg" approach, where the user interacts with a small set of recommended instances for each class. To address computational scalability, we developed an O(n) classification algorithm that incorporates user feedback incrementally, and without consulting the data's underlying representation matrix. Our computational evaluation showed that this algorithm has similar accuracy to several off-the-shelf classification algorithms with small amounts of labeled data. Empirical evaluation revealed that users performed better using our design compared to an equivalent active learning setup.
Citations: 11
Learning to assess the quality of stroke rehabilitation exercises
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302273
Min Hun Lee, D. Siewiorek, A. Smailagic, A. Bernardino, S. Badia
Due to the limited number of therapists, task-oriented exercises are often prescribed to post-stroke survivors as in-home rehabilitation. During in-home rehabilitation, a patient may become unmotivated or confused about complying with prescriptions without the feedback of a therapist. To address this challenge, this paper proposes an automated method that can achieve not only qualitative but also quantitative assessment of stroke rehabilitation exercises. Specifically, we explored a threshold model that utilizes the outputs of binary classifiers to quantify the correctness of a movement as a performance score. We collected movements of 11 healthy subjects and 15 post-stroke survivors using a Kinect sensor, along with ground-truth scores from primary and secondary therapists. The proposed method achieves 0.8436, 0.8264, and 0.7976 F1-scores of agreement with the primary therapist on three task-oriented exercises. Experimental results show that our approach performs as well as or better than multi-class classification, regression, or the evaluation of the secondary therapist. Furthermore, we found a strong correlation (R² = 0.95) between the sum of computed exercise scores and the Fugl-Meyer Assessment scores, a clinically validated motor-impairment index for post-stroke survivors. Our results demonstrate the feasibility of automatically assessing stroke rehabilitation exercises with decent agreement levels and clinical relevance.
Citations: 43
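The abstract describes a threshold model that turns binary correctness classifiers into a numeric performance score. A minimal reading of that idea, one binary check per movement criterion with the score being the fraction of checks passed, can be sketched as follows. The feature names and threshold values are invented for illustration; the paper's actual classifiers are learned from Kinect data.

```python
# Hedged sketch of a threshold model: each binary check flags one
# aspect of movement correctness; the performance score is the
# fraction of checks passed, in [0, 1].

def exercise_score(features, thresholds):
    """features / thresholds: dicts keyed by criterion name.

    A criterion passes when the measured feature reaches its
    threshold; missing features count as failures.
    """
    passes = sum(1 for k, t in thresholds.items()
                 if features.get(k, 0.0) >= t)
    return passes / len(thresholds)

# Illustrative criteria for a reaching exercise (not from the paper).
thresholds = {"elbow_extension_deg": 150,
              "shoulder_flexion_deg": 90,
              "trunk_stability": 0.8}
movement = {"elbow_extension_deg": 155,
            "shoulder_flexion_deg": 85,
            "trunk_stability": 0.9}
print(exercise_score(movement, thresholds))  # 2 of 3 checks pass
```

Summing such per-repetition scores over a session yields the aggregate that the paper correlates (R² = 0.95) with the Fugl-Meyer Assessment.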
Who should be my teammates: using a conversational agent to understand individuals and help teaming
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302264
Ziang Xiao, Michelle X. Zhou, W. Fu
We are building an intelligent agent to help teaming efforts. In this paper, we investigate the real-world use of such an agent to understand students deeply and help student team formation in a large university class involving about 200 students and 40 teams. Specifically, the agent interacted with each student in a text-based conversation at the beginning and end of the class. We show how the intelligent agent was able to elicit in-depth information from the students, infer the students' personality traits, and reveal the complex relationships between team personality compositions and team results. We also report on the students' behavior with, and impressions of, the agent. We discuss the benefits and limitations of such an intelligent agent in helping team formation, and the design considerations for creating intelligent agents for aiding in teaming efforts.
Citations: 41
Innovating with AI
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3308447
A. Ram
Google has created 8 products with over a billion users each. These products are powered by AI (artificial intelligence) at every level - from the core infrastructure and software platform to the application logic and the user interface. I'll share a behind-the-scenes look at how Google AI works and how we use it to create innovative UX (user experience) at a planetary scale. I'll end with our vision to democratize AI and how you can use Google AI in your own work.
Citations: 0
Transformer: a database-driven approach to generating forms for constrained interaction
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302269
Protiva Rahman, Arnab Nandi
Form-based data insertion or querying is often one of the most time-consuming steps in data-driven workflows. The small screen and lack of physical keyboard in devices such as smartphones and smartwatches introduce imprecision during user input. This can lead to data quality issues such as incomplete responses and errors, increasing user input time. We present Transformer, a system that leverages the contents of the database to automatically optimize forms for constrained input settings. Our cost function models the user input effort based on the schema and data distribution. This is used by Transformer to find the user interface (UI) widget and layout with ideal input cost for each form field. We demonstrate through user studies that Transformer provides a significantly improved user experience, with up to 50% and 57% reduction in form completion time for smartphones and smartwatches respectively.
Citations: 4
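Transformer's actual cost function is not given in the abstract, but the idea, picking for each field the widget whose expected input effort is lowest given the column's data distribution, can be sketched with a toy cost model. The per-widget cost formulas below are invented for illustration only; the paper's model also accounts for schema and layout.

```python
# Hedged sketch: choose a form widget per field from a toy cost model
# driven by the column's value distribution.
import math

def widget_cost(widget, distinct_values, avg_len):
    """Toy input-effort cost; lower is better (formulas are invented)."""
    if widget == "radio":
        # Radio buttons: cheap to tap, but only viable for tiny domains.
        return 0.5 * distinct_values if distinct_values <= 5 else math.inf
    if widget == "dropdown":
        # Scrolling a list: cost grows slowly with the option count.
        return math.log2(distinct_values + 1)
    if widget == "text":
        # Free typing: cost grows with the expected answer length.
        return avg_len
    raise ValueError(widget)

def best_widget(distinct_values, avg_len):
    widgets = ("radio", "dropdown", "text")
    return min(widgets,
               key=lambda w: widget_cost(w, distinct_values, avg_len))

print(best_widget(2, 6))      # tiny domain -> radio
print(best_widget(50, 12))    # medium domain -> dropdown
print(best_widget(10000, 8))  # huge domain -> free text
```

The design choice this captures is that the database itself, not a designer's guess, decides the widget: a column with two distinct values becomes a pair of radio buttons, while a high-cardinality column falls back to typing.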
Explaining recommendations in an interactive hybrid social recommender
Proceedings of the 24th International Conference on Intelligent User Interfaces Pub Date : 2019-03-17 DOI: 10.1145/3301275.3302318
Chun-Hua Tsai, Peter Brusilovsky
Hybrid social recommender systems use social relevance from multiple sources to recommend relevant items or people to users. To make hybrid recommendations more transparent and controllable, several researchers have explored interactive hybrid recommender interfaces, which allow for a user-driven fusion of recommendation sources. In this field of work, the intelligent user interface has been investigated as an approach to increase transparency and improve the user experience. In this paper, we attempt to further promote the transparency of recommendations by augmenting an interactive hybrid recommender interface with several types of explanations. We evaluate user behavior patterns and subjective feedback in a within-subject study (N=33). Results from the evaluation show the effectiveness of the proposed explanation models. The results of the post-treatment survey indicate a significant improvement in the perception of explainability, but this improvement comes with a lower degree of perceived controllability.
Citations: 45