{"title":"PATI","authors":"Yuxiang Gao, Chien-Ming Huang","doi":"10.1145/3301275.3302326","DOIUrl":"https://doi.org/10.1145/3301275.3302326","url":null,"abstract":"As robots begin to provide daily assistance to individuals in human environments, their end-users, who do not necessarily have substantial technical training or backgrounds in robotics or programming, will ultimately need to program and \"re-task\" their robots to perform a variety of custom tasks. In this work, we present PATI---a Projection-based Augmented Table-top Interface for robot programming---through which users are able to use simple, common gestures (e.g., pinch gestures) and tools (e.g., shape tools) to specify table-top manipulation tasks (e.g., pick-and-place) for a robot manipulator. PATI allows users to interact with the environment directly when providing task specifications; for example, users can utilize gestures and tools to annotate the environment with task-relevant information, such as specifying target landmarks and selecting objects of interest. We conducted a user study to compare PATI with a state-of-the-art, standard industrial method for end-user robot programming. Our results show that participants needed significantly less training time before they felt confident in using our system than they did for the industrial method. Moreover, participants were able to program a robot manipulator to complete a pick-and-place task significantly faster with PATI. This work indicates a new direction for end-user robot programming.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115181888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the feasibility of finger identification on capacitive touchscreens using deep learning","authors":"Huy Viet Le, Sven Mayer, N. Henze","doi":"10.1145/3301275.3302295","DOIUrl":"https://doi.org/10.1145/3301275.3302295","url":null,"abstract":"Touchscreens enable intuitive mobile interaction. However, touch input is limited to 2D touch locations which makes it challenging to provide shortcuts and secondary actions similar to hardware keyboards and mice. Previous work presented a wide range of approaches to provide secondary actions by identifying which finger touched the display. While these approaches are based on external sensors which are inconvenient, we use capacitive images from mobile touchscreens to investigate the feasibility of finger identification. We collected a dataset of low-resolution fingerprints and trained convolutional neural networks that classify touches from eight combinations of fingers. We focused on combinations that involve the thumb and index finger as these are mainly used for interaction. As a result, we achieved an accuracy of over 92% for a position-invariant differentiation between left and right thumbs. We evaluated the model and two use cases that users find useful and intuitive. We publicly share our data set (CapFingerld) comprising 455,709 capacitive images of touches from each finger on a representative mutual capacitive touchscreen and our models to enable future work using and improving them.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127593553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of user differences in customization: a case study in personalization for infovis-based content","authors":"Sébastien Lallé, C. Conati","doi":"10.1145/3301275.3302283","DOIUrl":"https://doi.org/10.1145/3301275.3302283","url":null,"abstract":"Although there is extensive evidence that personalization of interactive systems can improve the user's experience and satisfaction, it is also known that the two main approaches to deliver personalization, namely via customization or system-driven adaptation, have limitations. In particular, many users do not use customize mechanisms, while adaptation can be perceived as intrusive and opaque. In this paper, we explore an intermediary approach to personalization, namely delivering system-driven support to customization. To this end, we study a customization mechanism allowing to choose the type and amount of information displayed by means of information visualizations in a system for decision making, and examine the impact of user differences on the effectiveness of this mechanism. Our results show that, for the users who did use the customization mechanism, customization effectiveness was impacted by their levels of visualization literacy and locus of control. These results suggest that the customization mechanism could be improved by system-driven assistance to customize depending on the user's level of visualization literacy and locus of control.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116440854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inferencing underspecified natural language utterances in visual analysis","authors":"V. Setlur, Melanie Tory, Alex Djalali","doi":"10.1145/3301275.3302270","DOIUrl":"https://doi.org/10.1145/3301275.3302270","url":null,"abstract":"Handling ambiguity and underspecification of users' utterances is challenging, particularly for natural language interfaces that help with visual analytical tasks. Constraints in the underlying analytical platform and the users' expectations of high precision and recall require thoughtful inferencing to help generate useful responses. In this paper, we introduce a system to resolve partial utterances based on syntactic and semantic constraints of the underlying analytical expressions. We extend inferencing based on best practices in information visualization to generate useful visualization responses. We employ heuristics to help constrain the solution space of possible inferences, and apply ranking logic to the interpretations based on relevancy. We evaluate the quality of inferred interpretations based on relevancy and analytical usefulness.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125154317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exemplar based experience transfer","authors":"Paridhi Maheshwari, Nitish Bansal, Surya Dwivedi, Rohan Kumar, Pranav Maneriker, Balaji Vasan Srinivasan","doi":"10.1145/3301275.3302300","DOIUrl":"https://doi.org/10.1145/3301275.3302300","url":null,"abstract":"Banners are present in several forms and a person might be inspired by one or more of these. However, designing banners is a non-trivial task, especially for novices. Starting from a blank canvas can often be overwhelming, and exploring alternatives is time-consuming. In this paper, we propose an automatic approach to transfer a novice user's content into an example banner. Our algorithm begins with extracting the template of the example banner via a semantic segmentation approach. This is followed by an energy-based optimization framework to combine multiple design elements and arrive at an optimal layout. A crowd-sourced experiment comparing our automatic results against banners designed by creative professionals indicates the viability of the proposed work.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126991292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting job mediator and job seeker through an actionable dashboard","authors":"Sven Charleer, Francisco Gutiérrez, K. Verbert","doi":"10.1145/3301275.3302312","DOIUrl":"https://doi.org/10.1145/3301275.3302312","url":null,"abstract":"Job mediation services can assist job seekers in finding suitable employment through a personalised approach. Consultation or mediation sessions, supported by personal profile data of the job seeker, help job mediators understand personal situation and requests. Prediction and recommendation systems can directly provide job seekers with possible job vacancies. However, incorrect or unrealistic suggestions, and bad interpretations can result in bad decisions or demotivation of the job seeker. This paper explores how an interactive dashboard visualising prediction and recommendation output can help support the dialogue between job mediator and job seeker, by increasing the \"explainability\" and providing mediators with control over the information that is shown to job seekers.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128043429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Avoiding drill-down fallacies with VisPilot: assisted exploration of data subsets","authors":"D. Lee, Himel Dev, Huizi Hu, Hazem Elmeleegy, Aditya G. Parameswaran","doi":"10.1145/3301275.3302307","DOIUrl":"https://doi.org/10.1145/3301275.3302307","url":null,"abstract":"As datasets continue to grow in size and complexity, exploring multi-dimensional datasets remain challenging for analysts. A common operation during this exploration is drill-down-understanding the behavior of data subsets by progressively adding filters. While widely used, in the absence of careful attention towards confounding factors, drill-downs could lead to inductive fallacies. Specifically, an analyst may end up being \"deceived\" into thinking that a deviation in trend is attributable to a local change, when in fact it is a more general phenomenon; we term this the drill-down fallacy. One way to avoid falling prey to drill-down fallacies is to exhaustively explore all potential drill-down paths, which quickly becomes infeasible on complex datasets with many attributes. We present VisPilot, an accelerated visual data exploration tool that guides analysts through the key insights in a dataset, while avoiding drill-down fallacies. Our user study results show that VisPilot helps analysts discover interesting visualizations, understand attribute importance, and predict unseen visualizations better than other multidimensional data analysis baselines.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134322910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To explain or not to explain: the effects of personal characteristics when explaining music recommendations","authors":"Martijn Millecamp, N. Htun, C. Conati, K. Verbert","doi":"10.1145/3301275.3302313","DOIUrl":"https://doi.org/10.1145/3301275.3302313","url":null,"abstract":"Recommender systems have been increasingly used in online services that we consume daily, such as Facebook, Netflix, YouTube, and Spotify. However, these systems are often presented to users as a \"black box\", i.e. the rationale for providing individual recommendations remains unexplained to users. In recent years, various attempts have been made to address this black box issue by providing textual explanations or interactive visualisations that enable users to explore the provenance of recommendations. Among other things, results demonstrated benefits in terms of precision and user satisfaction. Previous research had also indicated that personal characteristics such as domain knowledge, trust propensity and persistence may also play an important role on such perceived benefits. Yet, to date, little is known about the effects of personal characteristics on explaining recommendations. To address this gap, we developed a music recommender system with explanations and conducted an online study using a within-subject design. We captured various personal characteristics of participants and administered both qualitative and quantitative evaluation methods. Results indicate that personal characteristics have significant influence on the interaction and perception of recommender systems, and that this influence changes by adding explanations. For people with a low need for cognition are the explained recommendations the most beneficial. For people with a high need for cognition, we observed that explanations could create a lack of confidence. Based on these results, we present some design implications for explaining recommendations.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131341195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discovering natural language commands in multimodal interfaces","authors":"Arjun Srinivasan, Mira Dontcheva, Eytan Adar, Seth Walker","doi":"10.1145/3301275.3302292","DOIUrl":"https://doi.org/10.1145/3301275.3302292","url":null,"abstract":"Discovering what to say and how to say it remains a challenge for users of multimodal interfaces supporting speech input. Users end up \"guessing\" commands that a system might support, often leading to interpretation errors and frustration. One solution to this problem is to display contextually relevant command examples as users interact with a system. The challenge, however, is deciding when, how, and which examples to recommend. In this work, we describe an approach for generating and ranking natural language command examples in multimodal interfaces. We demonstrate the approach using a prototype touch- and speech-based image editing tool. We experiment with augmentations of the UI to understand when and how to present command examples. Through an online user study, we evaluate these alternatives and find that in-situ command suggestions promote discovery and encourage the use of speech input.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"40 1-8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133074419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MyoSign","authors":"Qian Zhang, Dong Wang, Run Zhao, Yinggang Yu","doi":"10.1145/3301275.3302296","DOIUrl":"https://doi.org/10.1145/3301275.3302296","url":null,"abstract":"Automatic sign language recognition is an important milestone in facilitating the communication between the deaf community and hearing people. Existing approaches are either intrusive or susceptible to ambient environments and user diversity. Moreover, most of them perform only isolated word recognition, not sentence-level sequence translation. In this paper, we present MyoSign, a deep learning based system that enables end-to-end American Sign Language (ASL) recognition at both word and sentence levels. We leverage a lightweight wearable device which can provide inertial and electromyography signals to non-intrusively capture signs. First, we propose a multimodal Convolutional Neural Network (CNN) to abstract representations from inputs of different sensory modalities. Then, a bidirectional Long Short Term Memory (LSTM) is exploited to model temporal dependences. On the top of the networks, we employ Connectionist Temporal Classification (CTC) to get around temporal segments and achieve end-to-end continuous sign language recognition. We evaluate MyoSign on 70 commonly used ASL words and 100 ASL sentences from 15 volunteers. Our system achieves an average accuracy of 93.7% at word-level and 93.1% at sentence-level in user-independent settings. In addition, MyoSign can recognize sentences unseen in the training set with 92.4% accuracy. The encouraging results indicate that MyoSign can be a meaningful buildup in the advancement of sign language recognition.","PeriodicalId":153096,"journal":{"name":"Proceedings of the 24th International Conference on Intelligent User Interfaces","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115425672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}