{"title":"Explaining Recommendations Based on Feature Sentiments in Product Reviews","authors":"Li Chen, Feng Wang","doi":"10.1145/3025171.3025173","DOIUrl":"https://doi.org/10.1145/3025171.3025173","url":null,"abstract":"The explanation interface has been recognized important in recommender systems as it can help users evaluate recommendations in a more informed way for deciding which ones are relevant to their interests. In different decision environments, the specific aim of explanation can be different. In high-investment product domains (e.g., digital cameras, laptops) for which users usually attempt to avoid financial risk, how to support users to construct stable preferences and make better decisions is particularly crucial. In this paper, we propose a novel explanation interface that emphasizes explaining the tradeoff properties within a set of recommendations in terms of both their static specifications and feature sentiments extracted from product reviews. The objective is to assist users in more effectively exploring and understanding product space, and being able to better formulate their preferences for products by learning from other customers' experiences. Through two user studies (in form of both before-after and within-subjects experiments), we empirically identify the practical role of feature sentiments in combination with static specifications in producing tradeoff-oriented explanations. Specifically, we find that our explanation interface can be more effective to increase users' product knowledge, preference certainty, perceived information usefulness, recommendation transparency and quality, and purchase intention.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124340777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Untangling the Relationship Between Spatial Skills, Game Features, and Gender in a Video Game","authors":"H. Wauck, Ziang Xiao, Po-Tsung Chiu, W. Fu","doi":"10.1145/3025171.3025225","DOIUrl":"https://doi.org/10.1145/3025171.3025225","url":null,"abstract":"Certain commercial video games, such as Portal 2 and Tetris, have been empirically shown to train spatial reasoning skills, a subset of cognitive skills essential for success in STEM disciplines. However, no research to date has attempted to understand which specific features in these games tap into players' spatial ability or how individual player differences interact with these game features. This knowledge is crucially important as a first step towards understanding what makes these games effective and why, especially for subpopulations with lower spatial ability such as women and girls. We present the first empirical study analyzing the relationship between spatial ability, specific game features, and individual player differences using a custom-built computer game. Twenty children took a pretest of spatial skills and then played our game for 2 hours. We found that spatial ability pretest scores predicted several player behaviors related to in-game tasks involving 3D object construction and first person navigation. However, when analyzed by gender, girls' pretest scores were much less predictive of player behavior.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125033546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Label-and-Learn: Visualizing the Likelihood of Machine Learning Classifier's Success During Data Labeling","authors":"Yunjia Sun, E. Lank, Michael A. Terry","doi":"10.1145/3025171.3025208","DOIUrl":"https://doi.org/10.1145/3025171.3025208","url":null,"abstract":"While machine learning is a powerful tool for the analysis and classification of complex real-world datasets, it is still challenging, particularly for developers with limited expertise, to incorporate this technology into their software systems. The first step in machine learning, data labeling, is traditionally thought of as a tedious, unavoidable task in building a machine learning classifier. However, in this paper, we argue that it can also serve as the first opportunity for developers to gain insight into their dataset. Through a Label-and-Learn interface, we explore visualization strategies that leverage the data labeling task to enhance developers' knowledge about their dataset, including the likely success of the classifier and the rationale behind the classifier's decisions. At the same time, we show that the visualizations also improve users' labeling experience by showing them the impact they have made on classifier performance. We assess the visualizations in Label-and-Learn and experimentally demonstrate their value to software developers who seek to assess the utility of machine learning during the data labeling process.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114316937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BoostFM: Boosted Factorization Machines for Top-N Feature-based Recommendation","authors":"Fajie Yuan, G. Guo, J. Jose, Long Chen, Haitao Yu, Weinan Zhang","doi":"10.1145/3025171.3025211","DOIUrl":"https://doi.org/10.1145/3025171.3025211","url":null,"abstract":"Feature-based matrix factorization techniques such as Factorization Machines (FM) have been proven to achieve impressive accuracy for the rating prediction task. However, most common recommendation scenarios are formulated as a top-N item ranking problem with implicit feedback (e.g., clicks, purchases)rather than explicit ratings. To address this problem, with both implicit feedback and feature information, we propose a feature-based collaborative boosting recommender called BoostFM, which integrates boosting into factorization models during the process of item ranking. Specifically, BoostFM is an adaptive boosting framework that linearly combines multiple homogeneous component recommenders, which are repeatedly constructed on the basis of the individual FM model by a re-weighting scheme. Two ways are proposed to efficiently train the component recommenders from the perspectives of both pairwise and listwise Learning-to-Rank (L2R). The properties of our proposed method are empirically studied on three real-world datasets. The experimental results show that BoostFM outperforms a number of state-of-the-art approaches for top-N recommendation.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127750403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scaling Reflection Prompts in Large Classrooms via Mobile Interfaces and Natural Language Processing","authors":"Xiangmin Fan, Wencan Luo, Muhsin Menekse, D. Litman, Jingtao Wang","doi":"10.1145/3025171.3025204","DOIUrl":"https://doi.org/10.1145/3025171.3025204","url":null,"abstract":"We present the iterative design, prototype, and evaluation of CourseMIRROR (Mobile In-situ Reflections and Review with Optimized Rubrics), an intelligent mobile learning system that uses natural language processing (NLP) techniques to enhance instructor-student interactions in large classrooms. CourseMIRROR enables streamlined and scaffolded reflection prompts by: 1) reminding and collecting students' in-situ written reflections after each lecture; 2) continuously monitoring the quality of a student's reflection at composition time and generating helpful feedback to scaffold reflection writing; and 3) summarizing the reflections and presenting the most significant ones to both instructors and students. Through a combination of a 60-participant lab study and eight semester-long deployments involving 317 students, we found that the reflection and feedback cycle enabled by CourseMIRROR is beneficial to both instructors and students. Furthermore, the reflection quality feedback feature can encourage students to compose more specific and higher-quality reflections, and the algorithms in CourseMIRROR are both robust to cold start and scalable to STEM courses in diverse topics.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130433272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring Self-Esteem with Games","authors":"C. Santos, Kevin Hutchinson, Vassilis-Javed Khan, P. Markopoulos","doi":"10.1145/3025171.3025196","DOIUrl":"https://doi.org/10.1145/3025171.3025196","url":null,"abstract":"Self-esteem is a personality trait utilized to support the diagnosis of several psychological conditions. With this study we investigate the potential that computer games can have in assessing self-esteem. To that end, we designed and developed a platformer game and analyzed how in-game behavior relates to Rosenberg's Self-Esteem Scale. We examined: i) how a player's self-esteem influences game performance, ii) how a player's self-esteem generally influences in-game behavior iii) the possible game mechanics that assist in inferring a player's self-esteem. The study was conducted in two phases (N=98 and N=85). Results indicate that self-esteem does not have any impact on the player's performance, on the other hand, we found that players' self-evaluation of game performance correlates with their self-esteem.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130502775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-faceted Index driven Navigation for Educational Videos in Mobile Phones","authors":"Abhishek Kumar, Kushal Srivastava, Kuldeep Yadav, Om Deshmukh","doi":"10.1145/3025171.3025221","DOIUrl":"https://doi.org/10.1145/3025171.3025221","url":null,"abstract":"One of the challenges that is holding back wide spread consumption of educational videos on mobile devices is the lack of mobile interfaces which can provide efficient video navigation capabilities. In this paper, we utilize multi-modal data analysis techniques which include analysis of the spoken content and the written content of the video, to create a multi-faceted index. We present a novel and first-of-its-kind mobile interface which uses aforementioned multi-faceted index to provide intuitive, usable, and efficient way to navigate through a video. The efficacy of the proposed multi-faceted index driven mobile interface for non-linear navigation is demonstrated through a preliminary user study of 15 participants. We demonstrate that the proposed interface leads to statistically significant savings in navigation time as compared to that of a baseline interface used by leading e-learning providers.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125412990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Once More, With Feeling: Expressing Emotional Intensity in Touchscreen Gestures","authors":"Nabil Bin Hannan, Khalid Tearo, Joseph W. Malloch, Derek F. Reilly","doi":"10.1145/3025171.3025182","DOIUrl":"https://doi.org/10.1145/3025171.3025182","url":null,"abstract":"In this paper, we explore how people use touchscreens to express emotional intensity, and whether these intensities can be understood by oneself at a later date or by others. In a controlled study, 26 participants were asked to express a set of emotions mapped to predefined gestures, at range of different intensities. One week later, participants were asked to identify the emotional intensity visualized in animations of the gestures made by themselves and by other participants. Our participants expressed emotional intensity using gesture length, pressure, and speed primarily; the choice of attributes was impacted by the specific emotion, and the range and rate of increase of these attributes varied by individual and by emotion. Recognition accuracy of emotional intensity was higher at extreme ends, and was higher for one's own gestures than those made by others. The attributes of size and pressure (mapped to color in the animation) were most readily interpreted, while speed was more difficult to differentiate. We discuss human gesture drawing patterns to express emotional intensities and implications for developers of annotation systems and other touchscreen interfaces that wish to capture affect.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124156942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of Motion-Gesture Recognizer Error Pattern on User Workload and Behavior","authors":"Keiko Katsuragawa, A. Kamal, E. Lank","doi":"10.1145/3025171.3025234","DOIUrl":"https://doi.org/10.1145/3025171.3025234","url":null,"abstract":"Bi-level thresholding is a motion gesture recognition technique that mediates between false positives, and false negatives by using two threshold levels: a tighter threshold that limits false positives and recognition errors, and a looser threshold that prevents repeated errors (false negatives) by analyzing movements in sequence. In this paper, we examine the effects of bi-level thresholding on the workload and acceptance of end-users. Using a wizard-of-Oz recognizer, we hold recognition rates constant and adjust for fixed versus bi-level thresholding. Given identical recognition rates, we show that systems using bi-level thresholding result in significant lower workload scores on the NASA-TLX and accelerometer variance. Overall, these results argue for the viability of bi-level thresholding as an effective technique for balancing between false positives, recognition errors and false negatives.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125659820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Machine Learning via a GPU-accelerated Toolkit","authors":"Biye Jiang, J. Canny","doi":"10.1145/3025171.3025172","DOIUrl":"https://doi.org/10.1145/3025171.3025172","url":null,"abstract":"Machine learning is growing in importance in industry, sciences, and many other fields. In many and perhaps most of these applications, users need to trade off competing goals. Machine learning, however, has evolved around the optimization of a single, usually narrowly-defined criterion. In most cases, an expert makes (or should be making) trade-offs between these criteria which requires high-level (human) intelligence. With interactive customization and optimization the expert can incorporate secondary criteria into the model-generation process in an interactive way. In this paper we develop the techniques to perform customized and interactive model optimization, and demonstrate the approach on several examples. The keys to our approach are (i) a machine learning architecture which is modular and supports primary and secondary loss functions, while users can directly manipulate its parameters during training (ii) high-performance training so that non-trivial models can be trained in real-time (using roofline design and GPU hardware), and (iii) highly-interactive visualization tools that support dynamic creation of visualizations and controls to match various optimization criteria.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133069876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}