Proceedings of the 22nd International Conference on Intelligent User Interfaces: Latest Publications

Towards Understanding Human Mistakes of Programming by Example: An Online User Study
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025203
T. Lee, Casey Dugan, B. Bederson
{"title":"Towards Understanding Human Mistakes of Programming by Example: An Online User Study","authors":"T. Lee, Casey Dugan, B. Bederson","doi":"10.1145/3025171.3025203","DOIUrl":"https://doi.org/10.1145/3025171.3025203","url":null,"abstract":"Programming-by-Example (PBE) enables users to create programs without writing a line of code. However, there is little research on people's ability to accomplish complex tasks by providing examples, which is the key to successful PBE solutions. This paper presents an online user study, which reports observations on how well people decompose complex tasks, and disambiguate sub-tasks. Our findings suggest that disambiguation and decomposition are difficult for inexperienced users. We identify seven types of mistakes made, and suggest new opportunities for actionable feedback based on unsuccessful examples, with design implications for future PBE systems.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121603004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Use of Haptic Feedback to Train Correct Application of Force in Endodontic Surgery
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025217
Myat Su Yin, P. Haddawy, S. Suebnukarn, Holger Schultheis, P. Rhienmora
{"title":"Use of Haptic Feedback to Train Correct Application of Force in Endodontic Surgery","authors":"Myat Su Yin, P. Haddawy, S. Suebnukarn, Holger Schultheis, P. Rhienmora","doi":"10.1145/3025171.3025217","DOIUrl":"https://doi.org/10.1145/3025171.3025217","url":null,"abstract":"With the minute margins of error in endodontic surgery, training in manual dexterity and proper instrument handling are crucial components in the dental curriculum. Important parameters include tool path, tool angulation, and force applied. In this work, we focus on training of correct application of force. This is particularly challenging since the amounts of force used are on the order of tenths of Newtons, requiring a highly refined tactile sense and incorrect force can cause irreversible damage. Too great a force can cause overdrilling or in extreme cases perforation of the tooth. Too small a force can cause thermal irritation possibly resulting in tissue necrosis. Despite the importance of correct use of force, this is the dimension on which students receive the least tutorial feedback since force information is typically not available in traditional training settings. In this paper, we present an approach to using haptic feedback as a means to convey formative feedback on the correct application of force. Feedback is conveyed to the student graphically and the correct amount of force to apply is trained haptically. The simulator is rewound and the student is asked to redo the stage where the error occurred. Preliminary evaluation against a control group of students who received only feedback concerning outcome shows the feedback mechanism to be effective.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131647339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
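The abstract does not give the simulator's actual thresholds or feedback logic; the minimal sketch below, with invented force bounds and function names, only illustrates the kind of per-sample force-range check described above, where excessive force maps to an overdrilling warning and insufficient force to a thermal-irritation warning.

```python
# Hypothetical illustration of a per-sample force check as described in the
# abstract: forces on the order of tenths of Newtons, with distinct failure
# modes above and below a target band. Thresholds are invented for the
# example, not taken from the paper.

def classify_force(force_n: float,
                   lower_n: float = 0.2,
                   upper_n: float = 0.6) -> str:
    """Classify an applied drilling force (in Newtons) against a target band."""
    if force_n > upper_n:
        return "too_high"   # risk of overdrilling or perforation
    if force_n < lower_n:
        return "too_low"    # risk of thermal irritation / tissue necrosis
    return "ok"

def feedback_message(force_n: float) -> str:
    """Map the classification to the kind of formative feedback the trainer conveys."""
    return {
        "too_high": "Reduce force: overdrilling risk. Rewinding to redo this stage.",
        "too_low": "Increase force: thermal irritation risk. Rewinding to redo this stage.",
        "ok": "Force within the target range.",
    }[classify_force(force_n)]

if __name__ == "__main__":
    for f in (0.1, 0.4, 0.8):
        print(f"{f:.1f} N -> {feedback_message(f)}")
```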
Confiding in and Listening to Virtual Agents: The Effect of Personality
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025206
Jingyi Li, Michelle X. Zhou, Huahai Yang, G. Mark
{"title":"Confiding in and Listening to Virtual Agents: The Effect of Personality","authors":"Jingyi Li, Michelle X. Zhou, Huahai Yang, G. Mark","doi":"10.1145/3025171.3025206","DOIUrl":"https://doi.org/10.1145/3025171.3025206","url":null,"abstract":"We present an intelligent virtual interviewer that engages with a user in a text-based conversation and automatically infers the user's psychological traits, such as personality. We investigate how the personality of a virtual interviewer influences a user's behavior from two perspectives: the user's willingness to confide in, and listen to, a virtual interviewer. We have developed two virtual interviewers with distinct personalities and deployed them in a real-world recruiting event. We present findings from completed interviews with 316 actual job applicants. Notably, users are more willing to confide in and listen to a virtual interviewer with a serious, assertive personality. Moreover, users' personality traits, inferred from their chat text, influence their perception of a virtual interviewer, and their willingness to confide in and listen to a virtual interviewer. Finally, we discuss the implications of our work on building hyper- personalized, intelligent agents based on user traits.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131868851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
Inline Co-Evolution between Users and Information Presentation for Data Exploration
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025226
Landy Rajaonarivo, M. Courgeon, E. Maisel, P. D. Loor
{"title":"Inline Co-Evolution between Users and Information Presentation for Data Exploration","authors":"Landy Rajaonarivo, M. Courgeon, E. Maisel, P. D. Loor","doi":"10.1145/3025171.3025226","DOIUrl":"https://doi.org/10.1145/3025171.3025226","url":null,"abstract":"This paper presents an intelligent user interface model dedicated to the exploration of complex databases. This model is implemented on a 3D metaphor: a virtual museum. In this metaphor, the database elements are embodied as museum objects. The objects are grouped in rooms according to their semantic properties and relationships and the rooms organization forms the museum. Rooms? organization is not predefined but defined incrementally by taking into account not only the relationships between objects, but also the user's centers of interest. The latter are evaluated in real-time through user interactions within the virtual museum. This interface allows for a personal reading and favors the discovery of unsuspected links between data. In this paper, we present our model's formalization as well as its application to the context of cultural heritage.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130124190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Learning to Rate Clinical Concepts Using Simulated Clinician Feedback
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025232
Mohammad Alsulmi, Ben Carterette
{"title":"Learning to Rate Clinical Concepts Using Simulated Clinician Feedback","authors":"Mohammad Alsulmi, Ben Carterette","doi":"10.1145/3025171.3025232","DOIUrl":"https://doi.org/10.1145/3025171.3025232","url":null,"abstract":"We present a user-based model for rating concepts (i.e., words and phrases) in clinical queries based on their relevance to clinical decision making. Our approach can be adopted by information retrieval systems (e.g., search engines) to identify the most important concepts in user queries in order to better understand user intent and to improve search results. In our experiments, we examine several learning algorithms and show that by using simulated user feedback, our approach can predict the ratings of the clinical concepts in newly unseen queries with high prediction accuracy.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122350488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
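The abstract names neither the concept features nor the specific learners; the sketch below only illustrates the general setup it describes (fit a supervised model on simulated concept ratings, then predict on held-out queries), using synthetic features and an off-the-shelf regressor as placeholders rather than the paper's actual pipeline.

```python
# Minimal sketch of the setup described in the abstract: fit a supervised
# model on (concept features, simulated clinician rating) pairs, then
# predict ratings for concepts in unseen queries. Features, labels, and the
# learner are placeholders, not the ones used in the paper.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Toy per-concept features (e.g. IDF, frequency in the query, a flag for a
# clinically interesting semantic type, ...).
n_concepts = 500
X = rng.normal(size=(n_concepts, 4))

# Simulated clinician ratings on a 0-3 relevance scale (synthetic here).
true_weights = np.array([1.2, -0.4, 0.8, 0.1])
y = np.clip(X @ true_weights + rng.normal(scale=0.3, size=n_concepts), 0, 3)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)
print(f"MAE on held-out concepts: {mean_absolute_error(y_test, pred):.3f}")
```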
User Trust Dynamics: An Investigation Driven by Differences in System Performance
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025219
Kun Yu, S. Berkovsky, R. Taib, Dan Conway, Jianlong Zhou, Fang Chen
{"title":"User Trust Dynamics: An Investigation Driven by Differences in System Performance","authors":"Kun Yu, S. Berkovsky, R. Taib, Dan Conway, Jianlong Zhou, Fang Chen","doi":"10.1145/3025171.3025219","DOIUrl":"https://doi.org/10.1145/3025171.3025219","url":null,"abstract":"Trust is a key factor affecting the way people rely on automated systems. On the other hand, system performance has comprehensive implications on a user's trust variations. This paper examines systems of varied levels of accuracy, in order to reveal the relationship between system performance, a user's trust and reliance on the system. In particular, it is identified that system failures have a stronger effect on trust than system successes. We also describe how patterns of trust change according to a number of consecutive system failures or successes. Importantly, we show that increasing user familiarity with the system decreases the rate of trust change, which provides new insights on the development of user trust. Finally, our analysis established a correlation between a user's reliance on a system and their trust level. Combining all these findings can have important implications in general system design and implementation, by predicting how trust builds and when it stabilizes, as well as allowing for indirectly reading a user's trust in real time based on system reliance.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121397768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 69
Analyza: Exploring Data with Conversation
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025227
Kedar Dhamdhere, K. McCurley, Ralfi Nahmias, Mukund Sundararajan, Qiqi Yan
{"title":"Analyza: Exploring Data with Conversation","authors":"Kedar Dhamdhere, K. McCurley, Ralfi Nahmias, Mukund Sundararajan, Qiqi Yan","doi":"10.1145/3025171.3025227","DOIUrl":"https://doi.org/10.1145/3025171.3025227","url":null,"abstract":"We describe Analyza, a system that helps lay users explore data. Analyza has been used within two large real world systems. The first is a question-and-answer feature in a spreadsheet product. The second provides convenient access to a revenue/inventory database for a large sales force. Both user bases consist of users who do not necessarily have coding skills, demonstrating Analyza's ability to democratize access to data. We discuss the key design decisions in implementing this system. For instance, how to mix structured and natural language modalities, how to use conversation to disambiguate and simplify querying, how to rely on the ``semantics' of the data to compensate for the lack of syntactic structure, and how to efficiently curate the data.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116165834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 72
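The abstract describes design decisions rather than an implementation; as a toy illustration only (not Analyza's approach), the sketch below shows one way a question-answering layer can lean on the known "semantics" of a table, matching question tokens against measure and dimension names and then aggregating.

```python
# Toy illustration (not Analyza's implementation) of interpreting an
# underspecified natural-language question against a small table: match
# question tokens to known measure and dimension names, then aggregate.
from collections import defaultdict

rows = [
    {"country": "US", "product": "A", "revenue": 120.0},
    {"country": "US", "product": "B", "revenue": 80.0},
    {"country": "DE", "product": "A", "revenue": 60.0},
]
measures = {"revenue"}
dimensions = {"country", "product"}

def answer(question: str):
    tokens = set(question.lower().replace("?", "").split())
    measure = next((m for m in measures if m in tokens), None)
    dimension = next((d for d in dimensions if d in tokens), None)
    if measure is None or dimension is None:
        # A conversational system would ask a clarifying question here.
        return "Please clarify which measure and dimension you mean."
    totals = defaultdict(float)
    for row in rows:
        totals[row[dimension]] += row[measure]
    return dict(totals)

print(answer("What is revenue by country?"))  # {'US': 200.0, 'DE': 60.0}
```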
Intelligent Sensory Modality Selection for Electronic Supportive Devices
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025228
Kyle Kotowick, J. Shah
{"title":"Intelligent Sensory Modality Selection for Electronic Supportive Devices","authors":"Kyle Kotowick, J. Shah","doi":"10.1145/3025171.3025228","DOIUrl":"https://doi.org/10.1145/3025171.3025228","url":null,"abstract":"Humans operating in stressful environments, such as in military or emergency first-responder roles, are subject to high sensory input loads and must often switch their attention between different modalities. Conventional supportive devices that assist users in such situations typically provide information using a single, static sensory modality; however, this carries the risk of overload when the modalities for the primary task and the supportive device overlap. Effective feedback modality selection is essential in order to avoid such a risk. One potential method for accomplishing this is to intelligently select the supportive device's feedback modality based on the user's environment and given task; however, this may result in delayed or lost information due to the performance cost resulting from switching attention from one modality to another. This paper describes the design and results of a human-participant study designed to evaluate the benefits and risks of various intelligent modality-selection strategies. Our findings suggest complex interactions between strategies, sensory input load levels and feedback modalities, with numerous significant effects across many different performance metrics.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128171667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
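The paper evaluates several selection strategies that the abstract does not enumerate; purely as an illustration of the general idea, the toy rule below routes supportive-device feedback to the least-loaded channel other than the one the primary task occupies. The function, modality names, and load scores are invented for the example.

```python
# Toy rule-based illustration (not one of the paper's evaluated strategies)
# of routing supportive-device feedback away from the primary task's channel.

def select_feedback_modality(primary_task_modality, load):
    """Pick the least-loaded modality that is not the primary task's channel.

    load: dict mapping modality name -> current sensory load in [0, 1].
    """
    candidates = {m: l for m, l in load.items() if m != primary_task_modality}
    return min(candidates, key=candidates.get)

current_load = {"visual": 0.9, "auditory": 0.4, "tactile": 0.2}
print(select_feedback_modality("visual", current_load))  # -> "tactile"
```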
SIRUP: Serendipity In Recommendations via User Perceptions
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025185
Valentina Maccatrozzo, Manon Terstall, Lora Aroyo, G. Schreiber
{"title":"SIRUP: Serendipity In Recommendations via User Perceptions","authors":"Valentina Maccatrozzo, Manon Terstall, Lora Aroyo, G. Schreiber","doi":"10.1145/3025171.3025185","DOIUrl":"https://doi.org/10.1145/3025171.3025185","url":null,"abstract":"In this paper, we propose a model to operationalise serendipity in content-based recommender systems. The model, called SIRUP, is inspired by the Silvia's curiosity theory, based on the fundamental theory of Berlyne, aims at (1) measuring the novelty of an item with respect to the user profile, and (2) assessing whether the user is able to manage such level of novelty (coping potential). The novelty of items is calculated with cosine similarities between items, using Linked Open Data paths. The coping potential of users is estimated by measuring the diversity of the items in the user profile. We deployed and evaluated the SIRUP model in a use case with TV recommender using BBC programs dataset. Results show that the SIRUP model allows us to identify serendipitous recommendations, and, at the same time, to have 71% precision.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125983333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
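The abstract specifies the two quantities (cosine-similarity novelty against the profile, and coping potential as profile diversity) but not the exact feature representation; the sketch below computes both on toy item vectors, which stand in for the Linked Open Data path features SIRUP actually uses.

```python
# Minimal numeric sketch of the two quantities described in the abstract:
# novelty of a candidate item w.r.t. the user profile (via cosine
# similarity) and the user's coping potential (via profile diversity).
# Item vectors are toy placeholders, not Linked Open Data path features.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def novelty(candidate, profile_items):
    # Higher when the candidate is dissimilar to everything the user has seen.
    return 1.0 - max(cosine(candidate, item) for item in profile_items)

def coping_potential(profile_items):
    # Average pairwise dissimilarity of the profile: diverse profiles score higher.
    sims = [cosine(a, b)
            for i, a in enumerate(profile_items)
            for b in profile_items[i + 1:]]
    return 1.0 - float(np.mean(sims)) if sims else 0.0

profile = [np.array([1.0, 0.0, 0.0]), np.array([0.8, 0.2, 0.0])]
candidate = np.array([0.0, 1.0, 0.5])
print(f"novelty: {novelty(candidate, profile):.2f}")
print(f"coping potential: {coping_potential(profile):.2f}")
```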
Understanding Emotional Responses to Mobile Video Advertisements via Physiological Signal Sensing and Facial Expression Analysis
Proceedings of the 22nd International Conference on Intelligent User Interfaces. Pub Date: 2017-03-07. DOI: 10.1145/3025171.3025186
Phuong Pham, Jingtao Wang
{"title":"Understanding Emotional Responses to Mobile Video Advertisements via Physiological Signal Sensing and Facial Expression Analysis","authors":"Phuong Pham, Jingtao Wang","doi":"10.1145/3025171.3025186","DOIUrl":"https://doi.org/10.1145/3025171.3025186","url":null,"abstract":"Understanding a target audience's emotional responses to video advertisements is crucial to stakeholders. However, traditional methods for collecting such information are slow, expensive, and coarse-grained. We propose AttentiveVideo, an intelligent mobile interface with corresponding inference algorithms to monitor and quantify the effects of mobile video advertising. AttentiveVideo employs a combination of implicit photoplethysmography (PPG) sensing and facial expression analysis (FEA) to predict viewers' attention, engagement, and sentiment when watching video advertisements on unmodified smartphones. In a 24-participant study, we found that AttentiveVideo achieved good accuracies on a wide range of emotional measures (the best average accuracy = 73.59%, kappa = 0.46 across 9 metrics). We also found that the PPG sensing channel and the FEA technique are complimentary. While FEA works better for strong emotions (e.g., joy and anger), the PPG channel is more informative for subtle responses or emotions. These findings show the potential for both low-cost collection and deep understanding of emotional responses to mobile video advertisements.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131566674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
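The abstract does not detail how the PPG and FEA channels are combined; as a rough sketch only (not the AttentiveVideo pipeline), the example below fuses synthetic feature vectors from the two channels by simple concatenation and trains a standard classifier on a made-up engagement label.

```python
# Toy sketch of fusing the two channels the abstract describes (PPG-derived
# features and facial-expression features) into a single classifier input.
# Features and labels are synthetic; this is not AttentiveVideo's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
ppg_feats = rng.normal(size=(n, 3))    # e.g. heart-rate mean/variance, HRV proxy
fea_feats = rng.normal(size=(n, 5))    # e.g. facial action-unit intensities
X = np.hstack([ppg_feats, fea_feats])  # simple early fusion by concatenation

# Synthetic binary "engaged / not engaged" label for the demo.
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```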