{"title":"Proceedings of the 24th International Conference on Intelligent User Interfaces","authors":"","doi":"10.1145/3301275","DOIUrl":"https://doi.org/10.1145/3301275","url":null,"abstract":"","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114512645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BigBlueBot","authors":"Justin D. Weisz, Mohit Jain, N. Joshi, James Johnson, Ingrid Lange","doi":"10.1145/3301275.3302290","DOIUrl":"https://doi.org/10.1145/3301275.3302290","url":null,"abstract":"Chatbots are becoming quite popular, with many brands developing conversational experiences using platforms such as IBM's Watson Assistant and Facebook Messenger. However, previous research reveals that users' expectations of what conversational agents can understand and do far outpace their actual technical capabilities. Our work seeks to bridge the gap between these expectations and reality by designing a fun learning experience with several goals: explaining how chatbots work by mapping utterances to a set of intents, teaching strategies for avoiding conversational breakdowns, and increasing desire to use chatbots by creating feelings of empathy toward them. Our experience, called BigBlueBot, consists of interactions with two chatbots in which breakdowns occur and the user (or chatbot) must recover using one or more repair strategies. In a Mechanical Turk evaluation (N=88), participants learned strategies for having successful human-agent interactions, reported feelings of empathy toward the chatbots, and expressed a desire to interact with chatbots in the future.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122156457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vajra","authors":"Viktor Schlegel, Benedikt Lang, Siegfried Handschuh, A. Freitas","doi":"10.1145/3301275.3302267","DOIUrl":"https://doi.org/10.1145/3301275.3302267","url":null,"abstract":"Building natural language programming systems that are geared towards end-users requires the abstraction of formalisms inherently introduced by programming languages, capturing the intent of natural language inputs and mapping it to existing programming language constructs. We present a novel end-user programming paradigm for Python, which maps natural language commands into Python code. The proposed semantic parsing model aims to reduce the barriers for producing well-formed code (syntactic gap) and for exploring third-party APIs (lexico-semantic gap). The proposed method was implemented in a supporting system and evaluated in a usability study involving programmers as well as non-programmers. The results show that both groups are able to produce code with or without prior programming experience.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123853023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An intelligent assistant for mediation analysis in visual analytics","authors":"Chi-Hsien Yen, Yu-Chun Yen, W. Fu","doi":"10.1145/3301275.3302325","DOIUrl":"https://doi.org/10.1145/3301275.3302325","url":null,"abstract":"Mediation analysis is commonly performed using regressions or Bayesian network analysis in statistics, psychology, and health science; however, it is not effectively supported in existing visualization tools. The lack of assistance poses great risks when people use visualizations to explore causal relationships and make data-driven decisions, as spurious correlations or seemingly conflicting visual patterns might occur. In this paper, we focused on the causal reasoning task over three variables and investigated how an interface could help users reason more efficiently. We developed an interface that facilitates two processes involved in causal reasoning: 1) detecting inconsistent trends, which guides users' attention to important visual evidence, and 2) interpreting visualizations, by providing assisting visual cues and allowing users to compare key visualizations side by side. Our preliminary study showed that the features are potentially beneficial. We discuss design implications and how the features could be generalized for more complex causal analysis.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129888433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flux capacitors for JavaScript deloreans: approximate caching for physics-based data interaction","authors":"M. Khan, Arnab Nandi","doi":"10.1145/3301275.3302291","DOIUrl":"https://doi.org/10.1145/3301275.3302291","url":null,"abstract":"Interactive visualizations have become an effective and pervasive mode of allowing users to explore the data in a visual, fluid, and immersive manner. While modern web, mobile, touch, and gesturedriven next-generation interfaces such as Leap Motion allow for highly interactive experiences, they pose unique and unprecedented workloads to the underlying data platform. Usually, these visualizations do not need precise results for most queries generated during an interaction, and the users require the intermediate results as feedback only to guide them towards their goal query. We present a middleware component - Flux Capacitor, that insulates the backend from bursty and query-intensive workloads. Flux Capacitor uses prefetching and caching strategies devised by exploiting the inherent physics-metaphor of UI widgets such as friction and inertia in range sliders, and typical characteristics of user-interaction. This enables low interaction response times while intelligently trading off accuracy","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126000513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DARPA's explainable artificial intelligence (XAI) program","authors":"David Gunning","doi":"10.1145/3301275.3308446","DOIUrl":"https://doi.org/10.1145/3301275.3308446","url":null,"abstract":"The DARPA's Explainable Artificial Intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. This talk will summarize the XAI program and present highlights from these Phase 1 evaluations.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133549307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing public perception of self-driving cars: the autonomous vehicle acceptance model","authors":"Charles P. Hewitt, I. Politis, Theocharis Amanatidis, Advait Sarkar","doi":"10.1145/3301275.3302268","DOIUrl":"https://doi.org/10.1145/3301275.3302268","url":null,"abstract":"We introduce the Autonomous Vehicle Acceptance Model (AVAM), a model of user acceptance for autonomous vehicles, adapted from existing models of user acceptance for generic technologies. A 26-item questionnaire is developed in accordance with the model and a survey conducted to evaluate 6 autonomy scenarios. In a pilot survey (n = 54) and follow-up survey (n = 187), the AVAM presented good internal consistency and replicated patterns from previous surveys. Results showed that users were less accepting of high autonomy levels and displayed significantly lower intention to use highly autonomous vehicles. We also assess expected driving engagement of hands, feet and eyes which are shown to be lower for full autonomy compared with all other autonomy levels. This highlighted that partial autonomy, regardless of level, is perceived to require uniformly higher driver engagement than full autonomy. These results can inform experts regarding public perception of autonomy across SAE levels. The AVAM and associated questionnaire enable standardised evaluation of AVs across studies, allowing for meaningful assessment of changes in perception over time and between different technologies.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133411789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligently recommending key bindings on physical keyboards with demonstrations in Emacs","authors":"Shudan Zhong, Hong Xu","doi":"10.1145/3301275.3302272","DOIUrl":"https://doi.org/10.1145/3301275.3302272","url":null,"abstract":"Physical keyboards have been peripheral input devices to electronic computers since early 1970s and become ubiquitous during the past few decades, especially in professional areas such as software programming, professional game playing, and document processing. In these real-world applications, key bindings, a fundamental vehicle for human to interact with software systems using physical keyboards, play a critical role in users' productivity. However, as essential applications of artificial intelligence research, research on intelligent user interfaces and recommender systems barely relates to key bindings on physical keyboards. In this paper, we develop a recommender system (referred to as EKBRS) for intelligently recommending key bindings with demonstration in Emacs, which we use as a base user interface. This is a brand new direction of intelligent user interface research and also a novel application of recommender systems. To the best of our knowledge, this is the world's first intelligent user interface that heavily exploits key bindings of physical keyboards and the world's first recommender system for recommending key bindings. We empirically show the effectiveness of our recommender system and briefly discuss the applicability of this recommender system to other software systems.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131808221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards human-guided machine learning","authors":"Y. Gil, James Honaker, Shikhar Gupta, Yibo Ma, Vito D'Orazio, D. Garijo, S. Gadewar, Qifan Yang, N. Jahanshad","doi":"10.1145/3301275.3302324","DOIUrl":"https://doi.org/10.1145/3301275.3302324","url":null,"abstract":"Automated Machine Learning (AutoML) systems are emerging that automatically search for possible solutions from a large space of possible kinds of models. Although fully automated machine learning is appropriate for many applications, users often have knowledge that supplements and constraints the available data and solutions. This paper proposes human-guided machine learning (HGML) as a hybrid approach where a user interacts with an AutoML system and tasks it to explore different problem settings that reflect the user's knowledge about the data available. We present: 1) a task analysis of HGML that shows the tasks that a user would want to carry out, 2) a characterization of two scientific publications, one in neuroscience and one in political science, in terms of how the authors would search for solutions using an AutoML system, 3) requirements for HGML based on those characterizations, and 4) an assessment of existing AutoML systems in terms of those requirements.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121394067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Empathic dialogue system based on emotions extracted from tweets","authors":"Shunichi Tahara, K. Ikeda, K. Hoashi","doi":"10.1145/3301275.3302281","DOIUrl":"https://doi.org/10.1145/3301275.3302281","url":null,"abstract":"Empathic conversations have increasingly been important for dialogue systems to improve the users' experience, and increase their engagement with the system, which is difficult for many existing monotonous systems. Existing empathic dialogue systems are designed for limited domain dialogues. They respond fixed phrases toward observed user emotions. In open domain conversations, however, generating empathic responses for a wide variety of topics is required. In this paper, we draw on psychological studies about empathy, and propose an empathic dialogue system in open domain conversations. The proposed system generates empathic utterances based on observed emotions in user utterances, thus is able to build empathy with users. Our experiments have proven that users were able to feel more empathy from the proposed system, especially when their emotions were explicitly expressed in their utterances.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117036456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}