Title: Supporting Trust in Autonomous Driving
Authors: Renate Häuslschmid, Max von Bülow, Bastian Pfleging, A. Butz
DOI: 10.1145/3025171.3025198 (https://doi.org/10.1145/3025171.3025198)
Published: 2017-03-07, Proceedings of the 22nd International Conference on Intelligent User Interfaces
Abstract: Autonomous cars will likely hit the market soon, but trust in such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car's interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations overlaid on a driving scene in a user study: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car's indicators as the baseline. The world-in-miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust; however, we did not find a significant difference between the chauffeur and the baseline.

Title: CogniLearn
Authors: Srujana Gattupalli, Dylan Ebert, Michalis Papakostas, F. Makedon, V. Athitsos
DOI: 10.1145/3025171.3025213 (https://doi.org/10.1145/3025171.3025213)
Published: 2017-03-07, Proceedings of the 22nd International Conference on Intelligent User Interfaces
Abstract: This paper proposes a novel system for assessing physical exercises specifically designed for cognitive behavior monitoring. The proposed system provides decision support to experts for helping with early childhood development. Our work is based on the well-established Head-Toes-Knees-Shoulders (HTKS) framework, which is known for its sufficient psychometric properties and its ability to assess cognitive dysfunctions; HTKS serves as a useful measure of behavioral self-regulation. Our system, CogniLearn, automates the capture and motion analysis of users performing the HTKS game and provides detailed evaluations using state-of-the-art computer vision and deep-learning-based techniques for activity recognition and evaluation. The proposed system is supported by an intuitive, purpose-designed user interface that can help human experts cross-validate and/or refine their diagnosis. To evaluate our system, we created a novel dataset, which we made publicly available to encourage further experimentation. The dataset consists of 15 subjects performing 4 different variations of the HTKS task and contains in total more than 60,000 RGB frames, of which 4,443 are fully annotated.

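The abstract does not detail the recognition pipeline, so the following is only a minimal sketch of how an HTKS response might be scored from pose keypoints. The joint names and the nearest-joint rule are assumptions for illustration; they are not the CogniLearn method, which relies on deep-learning-based activity recognition.

```python
# Minimal illustrative sketch, NOT the CogniLearn pipeline: decide which HTKS
# target (head, toes, knees, shoulders) the hands are closest to in one frame.
# Joint names and the nearest-joint rule are assumptions, not the paper's method.
import numpy as np

def htks_target(keypoints):
    """keypoints: dict mapping joint name -> np.array([x, y]) in pixels."""
    targets = {
        "head": keypoints["head"],
        "shoulders": (keypoints["l_shoulder"] + keypoints["r_shoulder"]) / 2,
        "knees": (keypoints["l_knee"] + keypoints["r_knee"]) / 2,
        "toes": (keypoints["l_toe"] + keypoints["r_toe"]) / 2,
    }
    hands = (keypoints["l_wrist"] + keypoints["r_wrist"]) / 2
    # The target region with the smallest hand-to-region distance is reported.
    return min(targets, key=lambda name: np.linalg.norm(hands - targets[name]))
```

A full system would obtain the keypoints from a pose estimator and aggregate per-frame decisions over time before comparing them with the instructed HTKS sequence.
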
Title: Deep Sequential Recommendation for Personalized Adaptive User Interfaces
Authors: Harold Soh, S. Sanner, Madeleine White, G. Jamieson
DOI: 10.1145/3025171.3025207 (https://doi.org/10.1145/3025171.3025207)
Published: 2017-03-07, Proceedings of the 22nd International Conference on Intelligent User Interfaces
Abstract: Adaptive user interfaces (AUIs) can enhance the usability of complex software by providing real-time contextual adaptation and assistance. Ideally, AUIs should be personalized and versatile, i.e., able to adapt to each user, who may perform a variety of complex tasks. But this is difficult to achieve with many interaction elements when per-user data is sparse. In this paper, we propose an architecture for personalized AUIs that leverages developments in (1) deep learning, particularly gated recurrent units, to efficiently learn user interaction patterns, (2) collaborative filtering techniques that enable sharing of data among users, and (3) fast approximate nearest-neighbor methods in Euclidean spaces for quick UI control and/or content recommendations. Specifically, interaction histories are embedded in a learned space along with users and interaction elements; this allows the AUI to query and recommend likely next actions based on similar usage patterns across the user base. In a comparative evaluation on user-interface, web-browsing, and e-learning datasets, the deep recurrent neural network (DRNN) outperforms state-of-the-art tensor-factorization and metric embedding methods.

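As a rough illustration of the architecture described above (a recurrent encoder over interaction histories plus nearest-neighbor recommendation in a shared embedding space), here is a hedged PyTorch sketch. The layer sizes, the untrained model, and the use of exact `torch.cdist` search instead of an approximate nearest-neighbor index are assumptions, not the authors' DRNN.

```python
# Sketch only, assuming PyTorch; not the paper's exact DRNN or training setup.
import torch
import torch.nn as nn

class SequenceRecommender(nn.Module):
    def __init__(self, n_elements, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_elements, dim)   # one vector per UI element
        self.gru = nn.GRU(dim, dim, batch_first=True)   # gated recurrent encoder

    def encode(self, history):
        # history: (batch, seq_len) element ids -> (batch, dim) sequence embedding
        _, h = self.gru(self.item_emb(history))
        return h.squeeze(0)

    def recommend(self, history, k=5):
        # Euclidean nearest neighbors between the sequence embedding and all
        # element embeddings; a deployed system would use an approximate index.
        d = torch.cdist(self.encode(history), self.item_emb.weight)
        return d.topk(k, largest=False).indices          # k closest element ids

model = SequenceRecommender(n_elements=200)
next_elements = model.recommend(torch.randint(0, 200, (1, 10)))
```
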
Title: Adaptive View Management for Drone Teleoperation in Complex 3D Structures
Authors: J. Thomason, P. Ratsamee, K. Kiyokawa, Pakpoom Kriangkomol, J. Orlosky, T. Mashita, Yuuki Uranishi, H. Takemura
DOI: 10.1145/3025171.3025179 (https://doi.org/10.1145/3025171.3025179)
Published: 2017-03-07, Proceedings of the 22nd International Conference on Intelligent User Interfaces
Abstract: Drone navigation in complex environments poses many problems for teleoperators. Especially in 3D structures such as buildings or tunnels, viewpoints are often limited to the drone's current camera view, nearby objects can be collision hazards, and frequent occlusion can hinder accurate manipulation. To address these issues, we have developed a novel teleoperation interface that provides the user with environment-adaptive viewpoints, automatically configured to improve safety and smooth user operation. This real-time adaptive viewpoint system takes robot position, orientation, and 3D point-cloud information into account to adjust the user's viewpoint and maximize visibility. Our prototype uses simultaneous localization and mapping (SLAM) based reconstruction with an omnidirectional camera, and we use the resulting models as well as simulations in a series of preliminary experiments testing navigation of various structures. Results suggest that automatic viewpoint generation can outperform first- and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.

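The abstract does not spell out the viewpoint-selection algorithm. One simple way to picture an "environment-adaptive viewpoint" is a third-person camera that backs off along the drone's heading and pulls in whenever point-cloud geometry would occlude the view; the sketch below is an assumption-laden illustration of that idea, not the authors' method.

```python
# Generic third-person viewpoint sketch (not the paper's system): place the
# virtual camera behind the drone along its heading and shorten the camera
# distance when point-cloud points lie near the drone-to-camera segment.
import numpy as np

def adaptive_camera(drone_pos, heading, cloud, dist=3.0, clearance=0.5):
    """drone_pos: (3,), heading: unit vector (3,), cloud: (N, 3) point cloud."""
    cam = drone_pos - heading * dist
    seg = cam - drone_pos
    # Project cloud points onto the drone->camera segment and measure the
    # perpendicular distance; nearby points count as occluders.
    t = np.clip((cloud - drone_pos) @ seg / (seg @ seg), 0.0, 1.0)
    nearest = drone_pos + t[:, None] * seg
    occluders = np.linalg.norm(cloud - nearest, axis=1) < clearance
    if occluders.any():
        # Pull the camera in to just in front of the closest occluder.
        closest_t = t[occluders].min()
        cam = drone_pos + seg * max(closest_t - 0.1, 0.1)
    return cam
```
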
Title: DyFAV: Dynamic Feature Selection and Voting for Real-time Recognition of Fingerspelled Alphabet using Wearables
Authors: Prajwal Paudyal, Junghyo Lee, Ayan Banerjee, S. Gupta
DOI: 10.1145/3025171.3025216 (https://doi.org/10.1145/3025171.3025216)
Published: 2017-03-07, Proceedings of the 22nd International Conference on Intelligent User Interfaces
Abstract: Recent research has shown that reliable recognition of sign language words and phrases using user-friendly, non-invasive armbands is feasible and desirable. This work provides an analysis and implementation of adding fingerspelling recognition (FR) to such systems, a much harder problem due to the lack of distinctive hand movements. A novel algorithm called DyFAV (Dynamic Feature Selection and Voting) is proposed for this purpose, which exploits the fact that fingerspelling has a finite corpus (26 letters for ASL). The system uses an independent multiple-agent voting approach to identify letters with high accuracy. The independent voting of the agents ensures that the algorithm is highly parallelizable, so recognition times can be kept low enough for real-time mobile applications. The results are demonstrated on the entire ASL alphabet corpus for nine people with limited training, and an average recognition accuracy of 95.36% is achieved, which is better than the state of the art for armband sensors. The mobile, non-invasive, and real-time nature of the technology is demonstrated by evaluating performance on various types of Android phones and remote server configurations.

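DyFAV itself is not specified in the abstract beyond independent per-agent voting over a 26-letter corpus. The sketch below shows only the generic voting pattern and why it parallelizes well; the trivial stand-in agents, the feature grouping (e.g. separate EMG and inertial features), and the thread pool are illustrative assumptions.

```python
# Generic multi-agent voting sketch, not the DyFAV algorithm: each "agent"
# classifies the letter from one feature group independently, and the final
# prediction is the majority vote, so agents can run in parallel.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def vote(agents, feature_groups):
    """agents: callables mapping a feature vector to a letter;
    feature_groups: one feature vector per agent (assumed, e.g. EMG vs. IMU)."""
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(lambda af: af[0](af[1]), zip(agents, feature_groups)))
    letter, _ = Counter(votes).most_common(1)[0]
    return letter

# Example with trivial stand-in agents:
agents = [lambda f: "A", lambda f: "B", lambda f: "A"]
print(vote(agents, [None, None, None]))   # -> "A"
```
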
Title: "How May I Help You?": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts
Authors: Shereen Oraby, Pritam Gundecha, J. Mahmud, Mansurul Bhuiyan, R. Akkiraju
DOI: 10.1145/3025171.3025191 (https://doi.org/10.1145/3025171.3025191)
Published: 2017-03-07, Proceedings of the 22nd International Conference on Intelligent User Interfaces
Abstract: Given the increasing popularity of customer service dialogue on Twitter, analysis of conversation data is essential to understand trends in customer and agent behavior for the purpose of automating customer service interactions. In this work, we develop a novel taxonomy of fine-grained "dialogue acts" frequently observed in customer service, showcasing acts that are better suited to the domain than the more generic existing taxonomies. Using a sequential SVM-HMM model, we model conversation flow, predicting the dialogue act of a given turn in real time. We characterize differences between customer and agent behavior in Twitter customer service conversations and investigate the effect of testing our system on different customer service industries. Finally, we use a data-driven approach to predict important conversation outcomes: customer satisfaction, customer frustration, and overall problem resolution. We show that the type and location of certain dialogue acts in a conversation have a significant effect on the probability of desirable and undesirable outcomes, and we present actionable rules based on our findings. The patterns and rules we derive can be used as guidelines for outcome-driven automated customer service platforms.

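The abstract names a sequential SVM-HMM for turn-level dialogue-act prediction. As a simplified stand-in (not the authors' model), one can classify each turn with a linear SVM over TF-IDF features plus a one-hot encoding of the previous act, and decode a conversation greedily; the toy turns and act labels below are invented for illustration.

```python
# Hedged simplification of sequential dialogue-act tagging; not the paper's SVM-HMM.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

turns = ["my order never arrived", "sorry to hear that, can you DM your order id",
         "sure, it is 12345", "thanks, we will look into it"]
acts  = ["complaint", "request_info", "give_info", "statement"]   # toy labels

vec = TfidfVectorizer()
X_text = vec.fit_transform(turns)
labels = sorted(set(acts))
prev = np.zeros((len(turns), len(labels)))          # one-hot of the previous gold act
for i in range(1, len(turns)):
    prev[i, labels.index(acts[i - 1])] = 1.0
clf = LinearSVC().fit(hstack([X_text, csr_matrix(prev)]), acts)

def tag(conversation):
    # Greedy decoding: feed each predicted act forward as the next turn's feature.
    out, prev_act = [], None
    for turn in conversation:
        p = np.zeros((1, len(labels)))
        if prev_act is not None:
            p[0, labels.index(prev_act)] = 1.0
        prev_act = clf.predict(hstack([vec.transform([turn]), csr_matrix(p)]))[0]
        out.append(prev_act)
    return out

print(tag(["my package is late", "can you share the order id"]))
```
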
{"title":"Interaction Design for Rehabiliation","authors":"P. Markopoulos","doi":"10.1145/3025171.3026365","DOIUrl":"https://doi.org/10.1145/3025171.3026365","url":null,"abstract":"Well-known trends pertaining to the aging of population and the rising costs of healthcare motivate the development of rehabilitation technology. There is a considerable body of work in this area including efforts to make serious games, virtual reality and robotic applications. While innovative technologies have been introduced over the years, and often researchers produce promising experimental results, these technologies have not yet delivered the anticipated benefits. The causes for this apparent failure are evident when looking a closer look at the case of stroke rehabilitation, which is one of the heaviest researched topics for developing rehabilitation technologies. It is argued that improvements should be sought by centering the design on an understanding of patient needs, allowing patients, therapists and care givers in general to personalize solutions to the need of patients, effective feedback and motivation strategies to be implemented, and an in depth understanding of the socio-technical system in which the rehabilitation technology will be embedded. These are classic challenges that human computer interaction (HCI) researchers have been dealing with for years, which is why the field of rehabilitation technology requires considerable input from HCI researchers, and which explains the growing number of relevant HCI publications pertaining to rehabilitation. The talk reviews related research carried out at the Eindhoven University of Technology together with collaborating institutes, which has examined the value of tangible user interfaces and embodied interaction in rehabilitation, how designing playful interactions or games with a functional purpose., feedback design. I shall discuss the work we have done to develop rehabilitation technologies for the TagTrrainer system in the doctoral research of Daniel Tetteroo [2,3,4] and the explorations on wearable solutions in the doctoral research of Wang Qi.[5,6]. With our research being design driven and explorative, I will discuss also the current state of the art for the field and the challenges that need to be addressed for human computer interaction research to make a larger impact in the domain of rehabilitation technology.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"459 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128200788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: CQAVis: Visual Text Analytics for Community Question Answering
Authors: Enamul Hoque, Shafiq R. Joty, Luis Marquez, G. Carenini
DOI: 10.1145/3025171.3025210 (https://doi.org/10.1145/3025171.3025210)
Published: 2017-03-07, Proceedings of the 22nd International Conference on Intelligent User Interfaces
Abstract: Community question answering (CQA) forums can provide an effective means for sharing information and addressing a user's information needs about particular topics. However, many such online forums are not moderated, resulting in many low-quality and redundant comments, which makes it very challenging for users to find the appropriate answers to their questions. In this paper, we apply a user-centered design approach to develop a system, CQAVis, which supports users in identifying high-quality comments and getting their questions answered. Informed by the users' requirements, the system combines text analytics and interactive visualization techniques in a synergistic way. Given a new question posed by the user, the text analytics module automatically finds relevant answers by exploring existing related questions and the comments within their threads. Then the visualization module presents the search results to the user and supports the exploration of related comments. We have evaluated the system in the wild by deploying it within a CQA forum among thousands of real users. Through the online study, we gained deeper insights into the potential utility of the system and learned generalizable lessons for designing visual text analytics systems for the domain of CQA forums.

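The text analytics module is described only at a high level. As a minimal stand-in (not the CQAVis models), the sketch below retrieves the existing forum questions most similar to a new question via TF-IDF cosine similarity, after which their comment threads would be ranked and visualized; the example questions are invented.

```python
# Minimal retrieval sketch, not the CQAVis text analytics: rank existing forum
# questions by TF-IDF cosine similarity to a newly posed question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

forum_questions = [
    "How do I renew my residence permit?",
    "Where can I find good seafood restaurants?",
    "What documents are needed for a work visa?",
]

vec = TfidfVectorizer().fit(forum_questions)

def related(new_question, k=2):
    sims = cosine_similarity(vec.transform([new_question]),
                             vec.transform(forum_questions))[0]
    # Return the k most similar existing questions, best match first.
    return [forum_questions[i] for i in sims.argsort()[::-1][:k]]

print(related("which documents do I need for a visa?"))
```
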
Title: UI X-Ray: Interactive Mobile UI Testing Based on Computer Vision
Authors: Chun-Fu Chen, Marco Pistoia, Conglei Shi, Paolo Girolami, Joe W. Ligman, Y. Wang
DOI: 10.1145/3025171.3025190 (https://doi.org/10.1145/3025171.3025190)
Published: 2017-03-07, Proceedings of the 22nd International Conference on Intelligent User Interfaces
Abstract: User interface/experience (UI/UX) significantly affects the lifetime of any software program, particularly mobile apps. A bad UX can undermine the success of a mobile app even if that app enables sophisticated capabilities. A good UX, however, needs to be supported by a highly functional and user-friendly UI design. In spite of the importance of building mobile apps on solid UI designs, UI discrepancies, i.e., inconsistencies between UI design and implementation, are among the most numerous and expensive defects encountered during testing. This paper presents UI X-Ray, an interactive UI testing system that integrates computer-vision methods to facilitate the correction of UI discrepancies such as inconsistent positions, sizes, and colors of objects and fonts. Using UI X-Ray does not require any programming experience; therefore, UI X-Ray can be used even by non-programmers, particularly designers, which significantly reduces the overhead involved in writing tests. With its interactive interface, UI testers can quickly generate defect reports and revision instructions that would otherwise be produced manually. We verified UI X-Ray on 4 mobile apps whose entire development history was saved. UI X-Ray achieved a 99.03% true-positive rate, which significantly surpassed the 20.92% true-positive rate obtained via manual analysis. Furthermore, evaluating the results of our automated analysis can be completed quickly (< 1 minute per view on average) compared to the hours of manual work required by UI testers. UI X-Ray was also appreciated by skilled designers and improves their current workflow for generating UI defect reports and revision instructions. The system presented in this paper has recently become part of a commercial product.

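UI X-Ray's computer-vision methods are not detailed in the abstract. The sketch below shows one generic way to flag design-versus-implementation discrepancies by pixel differencing with OpenCV, assuming the design mock-up and the screenshot have the same resolution; it is an illustration of the general idea rather than the product's algorithm.

```python
# Generic discrepancy check, not the UI X-Ray pipeline: compare a design mock-up
# against an implementation screenshot and report bounding boxes of differing
# regions (e.g. shifted, resized, or recolored elements).
import cv2

def ui_discrepancies(design_path, screenshot_path, min_area=50):
    design = cv2.imread(design_path)
    actual = cv2.imread(screenshot_path)
    diff = cv2.absdiff(design, actual)                          # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)   # keep visible changes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each sufficiently large contour is a candidate UI discrepancy to review.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```
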
{"title":"Modern Touchscreen Keyboards as Intelligent User Interfaces: A Research Review","authors":"Shumin Zhai","doi":"10.1145/3025171.3026367","DOIUrl":"https://doi.org/10.1145/3025171.3026367","url":null,"abstract":"Essential to mobile communication, the touchscreen keyboard is the most ubiquitous intelligent user interface on modern mobile phones. Developing smarter, more efficient, easy to learn, and fun to use keyboards has presented many fascinating IUI research and design questions. Some have been addressed by academic research and practitioners in industry, while others remain significant ongoing research challenges. In this IUI 2017 keynote address I will review and synthesize the progress and open research questions of the past 15 years in text input, focusing on those my co-authors and I have directly dealt with through publications, such as the cost-benefit equations of automation and prediction [9], the power of machine/statistical intelligence [4, 7, 12], the human performance models fundamental to the design of error-correction algorithms [1, 2, 8], spatial scaling from a phone to a watch and the implications on human-machine labor division [5], user behavior and learning innovation [7, 11, 12, 13], and the challenges of evaluating the longitudinal effects of personalization and adaptation [4]. Through this research program review, I will illustrate why intelligent user interfaces, or the combination of machine intelligence and human factors, holds the future of human-computer interaction, and information technology at large.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126501607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}