Malin Eiband, H. Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and H. Hussmann. "Bringing Transparency Design into Practice." In Proceedings of the 23rd International Conference on Intelligent User Interfaces (IUI 2018). https://doi.org/10.1145/3172944.3172961

Abstract: Intelligent systems, which are on their way to becoming mainstream in everyday products, make recommendations and decisions for users based on complex computations. Researchers and policy makers increasingly raise concerns regarding the lack of transparency and comprehensibility of these computations from the user perspective. Our aim is to advance existing UI guidelines for more transparency in complex real-world design scenarios involving multiple stakeholders. To this end, we contribute a stage-based participatory process for designing transparent interfaces incorporating perspectives of users, designers, and providers, which we developed and validated with a commercial intelligent fitness coach. With our work, we hope to provide guidance to practitioners and to pave the way for a pragmatic approach to transparency in intelligent systems.
{"title":"Session details: Session 1B: Multimodal Interfaces","authors":"O. Mokryn","doi":"10.1145/3247906","DOIUrl":"https://doi.org/10.1145/3247906","url":null,"abstract":"","PeriodicalId":117649,"journal":{"name":"23rd International Conference on Intelligent User Interfaces","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123008368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Doctoral Consortium","authors":"J. Orlosky, Naomi Yamashita","doi":"10.1145/3247917","DOIUrl":"https://doi.org/10.1145/3247917","url":null,"abstract":"","PeriodicalId":117649,"journal":{"name":"23rd International Conference on Intelligent User Interfaces","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123373347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 6B: Social Media and Reccomenders","authors":"T. Itoh","doi":"10.1145/3247916","DOIUrl":"https://doi.org/10.1145/3247916","url":null,"abstract":"","PeriodicalId":117649,"journal":{"name":"23rd International Conference on Intelligent User Interfaces","volume":"35 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114111539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 6A: IUIs for Complex Tasks","authors":"Alison Smith","doi":"10.1145/3247915","DOIUrl":"https://doi.org/10.1145/3247915","url":null,"abstract":"","PeriodicalId":117649,"journal":{"name":"23rd International Conference on Intelligent User Interfaces","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114840475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. R. Ali, K. V. Orden, Kimberly Parkhurst, Shuyang Liu, Viet-Duy Nguyen, P. Duberstein, and Ehsan Hoque. "Aging and Engaging: A Social Conversational Skills Training Program for Older Adults." In Proceedings of the 23rd International Conference on Intelligent User Interfaces (IUI 2018). https://doi.org/10.1145/3172944.3172958

Abstract: We developed 'Aging and Engaging', a web-based intelligent interface, to improve communication skills among older adults. The interface allows users to practice conversations with a virtual assistant and receive feedback on eye contact, speaking volume, smiling, and valence of speech content. Feedback is generated automatically by analyzing the temporal properties of the conversation using a hidden Markov model. The interface was designed with the assistance of an expert advisory panel that works with geriatric patients, as well as a focus group of 12 older adults. To evaluate its effectiveness, we conducted a study with 25 older adults, each of whom participated in four conversations. Participants' response times to questions, as well as the amount of positive feedback, increased gradually through these interactions, as assessed by human judges. Participants found the feedback useful, easy to interpret, and fairly accurate, and expressed their interest in using the system at home. We plan to enroll subjects with difficulties in social communication; have them use the system over time at home in a randomized, controlled study; and measure any changes in their behavior.
{"title":"A Model for Detecting and Locating Behaviour Changes in Mobile Touch Targeting Sequences","authors":"Daniel Buschek","doi":"10.1145/3172944.3172952","DOIUrl":"https://doi.org/10.1145/3172944.3172952","url":null,"abstract":"Touch offset models capture users' targeting behaviour patterns across the screen. We present and evaluate the first extension of these models to explicitly address behaviour changes. We focus on user changes in particular: Given only a series of touch/target locations (x, y), our model detects 1) if the user has changed therein, and if so, 2) at which touch. We evaluate our model on smartphone targeting and typing data from the lab (N=28) and field (N=30). The results show that our model can exploit touch targeting sequences to reveal user changes. Our model outperforms existing non-sequence touch offset models and does not require training data. We discuss the model's limitations and ideas for further improvement. We conclude with recommendations for its integration into future touch biometric systems.","PeriodicalId":117649,"journal":{"name":"23rd International Conference on Intelligent User Interfaces","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126535987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Interaction Design and Evaluation Methods in Full-Body Interaction for Special Needs","authors":"Ciera Crowell","doi":"10.1145/3172944.3173150","DOIUrl":"https://doi.org/10.1145/3172944.3173150","url":null,"abstract":"This work is focused on the specific properties and evaluation of full-body interaction design of multi-user mixed reality environments. The main goal is to study how full-body interaction can aid in intervention strategies for children with autism, to improve their understanding and adoption of social behaviors with peers and with society in general. The research is based upon HCI theory, aided by general theories of embodied cognition, embodiment and developmental psychology. The main setting of the research is large scale floor-projected mixed environments, which will allow for testing interaction strategies and evaluation methods of experiences based on collocation of multiple users within a full-body interactive scenario, where they can practice interaction in a natural and uninhibited manner. The research consists of designing playful experiences for the target users in order to promote socialization, collaboration and social inclusion. Topics for analysis include understanding the dynamics of goal-oriented and open-ended gameplay, proxemics, and encouraged group collaboration, on the design of these systems. Assessment methods take into account multimodal analysis, including physiology-based data such as electrodermal activity and heart rate, of the children's behavioral and affective states in the experience.","PeriodicalId":117649,"journal":{"name":"23rd International Conference on Intelligent User Interfaces","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131593040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quester: A Speech-Based Question Answering Support System for Oral Presentations","authors":"R. Asadi, H. Trinh, H. Fell, T. Bickmore","doi":"10.1145/3172944.3172974","DOIUrl":"https://doi.org/10.1145/3172944.3172974","url":null,"abstract":"Current slideware, such as PowerPoint, reinforces the delivery of linear oral presentations. In settings such as question answering sessions or review lectures, more extemporaneous and dynamic presentations are required. An intelligent system that can automatically identify and display the slides most related to the presenter's speech, allows for more speaker flexibility in sequencing their presentation. We present Quester, a system that enables fast access to relevant presentation content during a question answering session and supports nonlinear presentations led by the speaker. Given the slides contents and notes, the system ranks presentation slides based on semantic closeness to spoken utterances, displays the most related slides, and highlights the corresponding content keywords in slide notes. The design of our system was informed by findings from interviews with expert presenters and analysis of recordings of lectures and conference presentations. In a within-subjects study comparing our dynamic support system with a static slide navigation system during a question answering session, presenters expressed a strong preference for our system and answered the questions more efficiently using our system.","PeriodicalId":117649,"journal":{"name":"23rd International Conference on Intelligent User Interfaces","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128614708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To Draw or Not to Draw: Recognizing Stroke-Hover Intent in Non-instrumented Gesture-free Mid-Air Sketching","authors":"Umema Bohari, Ting-Ju Chen, Vinayak","doi":"10.1145/3172944.3172985","DOIUrl":"https://doi.org/10.1145/3172944.3172985","url":null,"abstract":"Drawing curves in mid-air with fingers is a fundamental task with applications to 3D sketching, geometric modeling, handwriting recognition, and authentication. Mid-air curve input is most commonly accomplished through explicit user input; akin to click-and-drag, the user may use a hand posture (e.g. pinch) or a button-press on an instrumented controller to express the intention to start and stop sketching. In this paper, we present a novel approach to recognize the user's intention to draw or not to draw in a mid-air sketching task without the use of postures or controllers. For every new point recorded in the user's finger trajectory, the idea is to simply classify this point as either hover or stroke. Our work is motivated by a behavioral study that demonstrates the need for such an approach due to the lack of robustness and intuitiveness while using hand postures and instrumented devices. We captured sketch data from users using a haptics device and trained multiple binary classifiers using feature vectors based on the local geometric and motion profile of the trajectory. We present a systematic comparison of these classifiers and discuss the advantages of our approach to spatial curve input applications.","PeriodicalId":117649,"journal":{"name":"23rd International Conference on Intelligent User Interfaces","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129314477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}