{"title":"Session details: From touch through air to brain","authors":"M. Zancanaro","doi":"10.1145/3260901","DOIUrl":"https://doi.org/10.1145/3260901","url":null,"abstract":"","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129719516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teaching motion gestures via recognizer feedback","authors":"A. Kamal, Yang Li, E. Lank","doi":"10.1145/2557500.2557521","DOIUrl":"https://doi.org/10.1145/2557500.2557521","url":null,"abstract":"When using motion gestures, 3D movements of a mobile phone, as an input modality, one significant challenge is how to teach end users the movement parameters necessary to successfully issue a command. Is a simple video or image depicting movement of a smartphone sufficient? Or do we need three-dimensional depictions of movement on external screens to train users? In this paper, we explore mechanisms to teach end users motion gestures, examining two factors. The first factor is how to represent motion gestures: as icons that describe movement, video that depicts movement using the smartphone screen, or a Kinect-based teaching mechanism that captures and depicts the gesture on an external display in three-dimensional space. The second factor we explore is recognizer feedback, i.e. a simple representation of the proximity of a motion gesture to the desired motion gesture based on a distance metric extracted from the recognizer. We show that, by combining video with recognizer feedback, participants master motion gestures equally quickly as end users that learn using a Kinect. These results demonstrate the viability of training end users to perform motion gestures using only the smartphone display.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126805701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who will retweet this?: Automatically Identifying and Engaging Strangers on Twitter to Spread Information","authors":"Kyumin Lee, J. Mahmud, Jilin Chen, Michelle X. Zhou, Jeffrey Nichols","doi":"10.1145/2557500.2557502","DOIUrl":"https://doi.org/10.1145/2557500.2557502","url":null,"abstract":"There has been much effort on studying how social media sites, such as Twitter, help propagate information in different situations, including spreading alerts and SOS messages in an emergency. However, existing work has not addressed how to actively identify and engage the right strangers at the right time on social media to help effectively propagate intended information within a desired time frame. To ad-dress this problem, we have developed two models: (i) a feature-based model that leverages peoplesfi exhibited social behavior, including the content of their tweets and social interactions, to characterize their willingness and readiness to propagate information on Twitter via the act of retweeting; and (ii) a wait-time model based on a user's previous retweeting wait times to predict her next retweeting time when asked. Based on these two models, we build a recommender system that predicts the likelihood of a stranger to retweet information when asked, within a specific time window, and recommends the top-N qualified strangers to engage with. Our experiments, including live studies in the real world, demonstrate the effectiveness of our work.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126977232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time-offset interaction with a holocaust survivor","authors":"Ron Artstein, D. Traum, O. Alexander, A. Leuski, Andrew Jones, Kallirroi Georgila, P. Debevec, W. Swartout, Heather Maio, Stephen Smith","doi":"10.1145/2557500.2557540","DOIUrl":"https://doi.org/10.1145/2557500.2557540","url":null,"abstract":"Time-offset interaction is a new technology that allows for two-way communication with a person who is not available for conversation in real time: a large set of statements are prepared in advance, and users access these statements through natural conversation that mimics face-to-face interaction. Conversational reactions to user questions are retrieved through a statistical classifier, using technology that is similar to previous interactive systems with synthetic characters; however, all of the retrieved utterances are genuine statements by a real person. Recordings of answers, listening and idle behaviors, and blending techniques are used to create a persistent visual image of the person throughout the interaction. A proof-of-concept has been implemented using the likeness of Pinchas Gutter, a Holocaust survivor, enabling short conversations about his family, his religious views, and resistance. This proof-of-concept has been shown to dozens of people, from school children to Holocaust scholars, with many commenting on the impact of the experience and potential for this kind of interface.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130566143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequence: interactive mining and visualization of temporal frequent event sequences","authors":"Adam Perer, Fei Wang","doi":"10.1145/2557500.2557508","DOIUrl":"https://doi.org/10.1145/2557500.2557508","url":null,"abstract":"Extracting insights from temporal event sequences is an important challenge. In particular, mining frequent patterns from event sequences is a desired capability for many domains. However, most techniques for mining frequent patterns are ineffective for real-world data that may be low-resolution, concurrent, or feature many types of events, or the algorithms may produce results too complex to interpret. To address these challenges, we propose Frequence, an intelligent user interface that integrates data mining and visualization in an interactive hierarchical information exploration system for finding frequent patterns from longitudinal event sequences. Frequence features a novel frequent sequence mining algorithm to handle multiple levels-of-detail, temporal context, concurrency, and outcome analysis. Frequence also features a visual interface designed to support insights, and support exploration of patterns of the level-of-detail relevant to users. Frequence's effectiveness is demonstrated with two use cases: medical research mining event sequences from clinical records to understand the progression of a disease, and social network research using frequent sequences from Foursquare to understand the mobility of people in an urban environment.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"2012 25","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113966256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who have got answers?: growing the pool of answerers in a smart enterprise social QA system","authors":"Lin Luo, Fei Wang, Michelle X. Zhou, Yingxin Pan, Hang Chen","doi":"10.1145/2557500.2557531","DOIUrl":"https://doi.org/10.1145/2557500.2557531","url":null,"abstract":"On top of an enterprise social platform, we are building a smart social QA system that automatically routes questions to suitable employees who are willing, able, and ready to provide answers. Due to a lack of social QA history (training data) to start with, in this paper, we present an optimization-based approach that recommends both top-matched active (seed) and inactive (prospect) answerers for a given question. Our approach includes three parts. First, it uses a predictive model to find top-ranked seed answerers by their fitness, including their ability and willingness, to answer a question. Second, it uses distance metric learning to discover prospects most similar to the seeds identified in the first step. Third, it uses a constraint-based approach to balance the selection of both seeds and prospects identified in the first two steps. As a result, not only does our solution route questions to top-matched active users, but it also engages inactive users to grow the pool of answerers. Our real-world experiments that routed 114 questions to 684 people identified from 400,000+ employees included 641 prospects (93.7%) and achieved about 70% answering rate with 83% of answers received a lot/full confidence.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127570173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing expert solutions in exploratory learning environments using plan recognition","authors":"Or Seri, Y. Gal","doi":"10.1145/2557500.2557520","DOIUrl":"https://doi.org/10.1145/2557500.2557520","url":null,"abstract":"Exploratory Learning Environments (ELE) are open-ended and flexible software, supporting interaction styles that include exogenous actions and trial-and-error. This paper shows that using AI techniques to visualize worked examples in ELEs improves students' generalization of mathematical concepts across problems, as measured by their performance. Students were exposed to a worked example of a problem solution using an ELE for statistics education. One group in the study was presented with a hierarchical plan of relevant activities that emphasized the sub-goals and the structure relating to the solution. This visualization used an AI algorithm to match a log of activities in the ELEs to ideal solutions. We measured students' performance when using the ELE to solve new problems that required generalization of concepts introduced in the example solution. The results showed that students who were shown the plan visualization significantly outperformed other students who were presented with a step-by-step list of actions in the software used to generate the same solution to the example problem. Analysis of students' explanations of the problem solution shows that the students in the former condition also demonstrated deeper understanding of the solution process. These results demonstrate the benefit to students when using AI technology to visualize worked examples in ELEs and suggests future applications of this approach to actively support students' learning and teachers' understanding of students' activities.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128288182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SpiderEyes: designing attention- and proximity-aware collaborative interfaces for wall-sized displays","authors":"Jakub Dostal, Uta Hinrichs, P. Kristensson, A. Quigley","doi":"10.1145/2557500.2557541","DOIUrl":"https://doi.org/10.1145/2557500.2557541","url":null,"abstract":"With the proliferation of large multi-faceted datasets, a critical question is how to design collaborative environments, in which this data can be analysed in an efficient and insightful manner. Exploiting people's movements and distance to the data display and to collaborators, proxemic interactions can potentially support such scenarios in a fluid and seamless way, supporting both tightly coupled collaboration as well as parallel explorations. In this paper we introduce the concept of collaborative proxemics: enabling groups of people to collaboratively use attention- and proximity-aware applications. To help designers create such applications we have developed SpiderEyes: a system and toolkit for designing attention- and proximity-aware collaborative interfaces for wall-sized displays. SpiderEyes is based on low-cost technology and allows accurate markerless attention-aware tracking of multiple people interacting in front of a display in real-time. We discuss how this toolkit can be applied to design attention- and proximity-aware collaborative scenarios around large wall-sized displays, and how the information visualisation pipeline can be extended to incorporate proxemic interactions.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"2018 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121837497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Left and right hand distinction for multi-touch tabletop interactions","authors":"Zhensong Zhang, Fengjun Zhang, Hui Chen, Jiasheng Liu, Hongan Wang, G. Dai","doi":"10.1145/2557500.2557525","DOIUrl":"https://doi.org/10.1145/2557500.2557525","url":null,"abstract":"In multi-touch interactive systems, it is of great significance to distinguish which hand of the user is touching the surface in real time. Left-right hand distinction is essential for recognizing the multi-finger gestures and further fully exploring the potential of bimanual interaction. However, left-right hand distinction is beyond the capability of most existing multi-touch systems. In this paper, we present a new method for left and right hand distinction based on the human anatomy, work area, finger orientation and finger position. Considering the ergonomics principles of gesture designing, the body-forearm triangle model was proposed. Furthermore, a heuristic algorithm was introduced to group multi-touch contact points and then made left-right hand distinction. A dataset of 2880 images has been set up to evaluate the proposed left-right hand distinction method. The experimental results demonstrate that our method can guarantee the high recognition accuracy and real time performance in freely bimanual multi-touch interactions.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131139385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Steptorials: mixed-initiative learning of high-functionality applications","authors":"H. Lieberman, Elizabeth Rosenzweig, C. Fry","doi":"10.1145/2557500.2557543","DOIUrl":"https://doi.org/10.1145/2557500.2557543","url":null,"abstract":"How can a new user learn an unfamiliar application, especially if it is a high-functionality (hi-fun) application, like Photoshop, Excel, or programming language IDEfi Many applications provide introductory videos, illustrative examples, and documentation on individual operations. Tests show, however, that novice users are likely to ignore the provided help, and try to learn by exploring the application first. In a hi-fun application, though, the user may lack understanding of the basic concepts of an application's operation, even though they were likely explained in the (ignored) documentation. This paper introduces steptorials (\"stepper tutorials\"), a new interaction strategy for learning hi-fun applications. A steptorial aims to teach the user how to work through a simple, but nontrivial, example of using the application. Steptorials are unique because they allow varying the autonomy of the user at every step. A steptorial has a control structure of a reversible programming language stepper. The user may choose, at any time, to be shown how to do a step, be guided through it, use the application interface without constraint, or to return to a previous step. It reduces the risk in either trying new operations yourself, or conversely, the risk of ceding control to the computer. It introduces a new paradigm of mixed-initiative learning of application interfaces.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130383762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}