Yuanzhe Chen, Qing Chen, Mingqian Zhao, S. Boyer, K. Veeramachaneni, Huamin Qu. "DropoutSeer: Visualizing learning patterns in Massive Open Online Courses for dropout reasoning and prediction." 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 2016. DOI: https://doi.org/10.1109/VAST.2016.7883517

Abstract: Aiming at massive participation and open-access education, Massive Open Online Courses (MOOCs) have attracted millions of learners over the past few years. However, the high dropout rate among learners is considered one of the most crucial factors that may hinder the development of MOOCs. To tackle this problem, statistical models have been developed to predict dropout behavior based on learner activity logs. Although predictive models can foresee dropout behavior, it remains difficult for users to understand the reasons behind the predicted results and to design interventions that prevent dropout; conversely, a better understanding of dropout lets researchers in predictive modeling improve their models. In this paper, we introduce DropoutSeer, a visual analytics system which not only helps instructors and education experts understand the reasons for dropout, but also allows researchers to identify crucial features that can further improve model performance. Both the heterogeneous data extracted from three different kinds of learner activity logs (i.e., clickstream, forum posts, and assignment records) and the predicted results are visualized in the proposed system. Case studies and expert interviews demonstrate the usefulness and effectiveness of DropoutSeer.
{"title":"EventAction: Visual analytics for temporal event sequence recommendation","authors":"F. Du, C. Plaisant, N. Spring, B. Shneiderman","doi":"10.1109/VAST.2016.7883512","DOIUrl":"https://doi.org/10.1109/VAST.2016.7883512","url":null,"abstract":"Recommender systems are being widely used to assist people in making decisions, for example, recommending films to watch or books to buy. Despite its ubiquity, the problem of presenting the recommendations of temporal event sequences has not been studied. We propose EventAction, which to our knowledge, is the first attempt at a prescriptive analytics interface designed to present and explain recommendations of temporal event sequences. EventAction provides a visual analytics approach to (1) identify similar records, (2) explore potential outcomes, (3) review recommended temporal event sequences that might help achieve the users' goals, and (4) interactively assist users as they define a personalized action plan associated with a probability of success. Following the design study framework, we designed and deployed EventAction in the context of student advising and reported on the evaluation with a student review manager and three graduate students.","PeriodicalId":357817,"journal":{"name":"2016 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"20 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114132650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tanja Blascheck, Fabian Beck, Sebastian Baltes, T. Ertl, D. Weiskopf. "Visual analysis and coding of data-rich user behavior." 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 2016. DOI: https://doi.org/10.1109/VAST.2016.7883520

Abstract: Investigating user behavior involves abstracting low-level events to higher-level concepts. This requires an analyst to study individual user activities, assign codes which categorize behavior, and develop a consistent classification scheme. To better support this reasoning process, we suggest a novel visual analytics approach which integrates rich user data including transcripts, videos, eye movement data, and interaction logs. Word-sized visualizations embedded into a tabular representation provide a space-efficient and detailed overview of user activities. An analyst assigns codes, grouped into code categories, as part of an interactive process. Filtering and searching help to select specific activities and focus an analysis. A comparison visualization summarizes the results of coding and reveals relationships between codes. Editing features support efficient assignment, refinement, and aggregation of codes. We demonstrate the practical applicability and usefulness of our approach in a case study and describe expert feedback.
Siming Chen, Shuai Chen, Zhenhuang Wang, Jie Liang, Xiaoru Yuan, Nan Cao, Yadong Wu. "D-Map: Visual analysis of ego-centric information diffusion patterns in social media." 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 2016. DOI: https://doi.org/10.1109/VAST.2016.7883510

Abstract: Popular social media platforms can rapidly propagate vital information over social networks to a significant number of people. In this work we present D-Map (Diffusion Map), a novel visualization method that supports exploration and analysis of social behaviors during such information diffusion and propagation on typical social media through a map metaphor. In D-Map, users who participated in reposting (i.e., resending a message initially posted by others) a central user's posts (i.e., a series of original tweets) are collected and mapped to a hexagonal grid based on their behavioral similarities and in chronological order of the repostings. With additional interaction and linking, D-Map is capable of providing visual portraits of influential users and describing their social behaviors. A comprehensive visual analysis system is developed to support interactive exploration with D-Map. We evaluate our work with real-world social media data and find interesting patterns among users. Key players, important information diffusion paths, and interactions among social communities can be identified.
Florian Heimerl, M. John, Qi Han, Steffen Koch, T. Ertl. "DocuCompass: Effective exploration of document landscapes." 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 2016. DOI: https://doi.org/10.1109/VAST.2016.7883507

Abstract: The creation of interactive visualizations to analyze text documents has gained impressive momentum in recent years. This is not surprising in light of the massive and still increasing amounts of available digitized text. Websites, social media, news wires, and digital libraries are just a few examples of the diverse text sources whose visual analysis and exploration offers new opportunities to effectively mine and manage the information and knowledge hidden within them. A popular visualization method for large text collections is to represent each document by a glyph in 2D space. These landscapes can be the result of optimizing pairwise distances in 2D to represent document similarities, or they are provided directly as metadata, such as geo-locations. For well-defined information needs, suitable interaction methods are available for these spatializations. However, free exploration and navigation at a level of abstraction between a labeled document spatialization and reading single documents is largely unsupported. As a result, vital foraging steps for task-tailored actions, such as selecting subgroups of documents for detailed inspection, and subsequent sense-making steps are hampered. To fill this gap, we propose DocuCompass, a focus+context approach based on the lens metaphor. It comprises multiple methods to characterize local groups of documents and to efficiently guide exploration based on users' requirements. DocuCompass thus allows for effective interactive exploration of document landscapes without disrupting users' mental maps by changing the layout itself. We discuss the suitability of multiple navigation and characterization methods for different spatializations and texts. Finally, we provide insights generated through user feedback and discuss the effectiveness of our approach.
P. Muthumanickam, K. Vrotsou, M. Cooper, J. Johansson. "Shape grammar extraction for efficient query-by-sketch pattern matching in long time series." 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 2016. DOI: https://doi.org/10.1109/VAST.2016.7883518

Abstract: Long time series, involving thousands or even millions of time steps, are common in many application domains but remain very difficult to explore interactively. Often the analytical task in such data is to identify specific patterns; since this is a complex and computationally difficult problem, a common solution is to focus the search so that only interesting patterns are identified. We propose an efficient method for exploring user-sketched patterns, incorporating the domain expert's knowledge, in time series data through a shape-grammar-based approach. The shape grammar is extracted from the time series by treating the data as a combination of basic elementary shapes positioned across different amplitudes. We represent these basic shapes using a ratio value, perform binning on the ratio values, and apply a symbolic approximation. Our proposed pattern-matching method is amplitude-, scale-, and translation-invariant and, since pattern search and pattern-constraint relaxation happen at the symbolic level, is efficient enough for use in a real-time/online system. We demonstrate the effectiveness of our method in a case study on stock market data, although it is applicable to any numeric time series data.
Ji Hwan Park, S. Nadeem, Seyedkoosha Mirhosseini, A. Kaufman. "C2A: Crowd consensus analytics for virtual colonoscopy." 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 2016. DOI: https://doi.org/10.1109/VAST.2016.7883508

Abstract: We present C2A, a medical crowdsourcing visual analytics platform to visualize, classify, and filter crowdsourced clinical data. More specifically, C2A is used to build consensus on a clinical diagnosis by visualizing crowd responses and filtering out anomalous activity. Crowdsourced medical applications have recently shown promise, with non-expert users (the crowd) achieving accuracy similar to that of medical experts. This has the potential to reduce interpretation/reading time and possibly improve accuracy by building a consensus on the findings beforehand and letting the medical experts make the final diagnosis. In this paper, we focus on a virtual colonoscopy (VC) application, with clinical technicians as our target users and radiologists acting as consultants who classify segments as benign or malignant. In particular, C2A is used to analyze and explore crowd responses on video segments created from fly-throughs in the virtual colon. C2A provides several interactive visualization components to build crowd consensus on video segments, to detect anomalies in the crowd data and in the VC video segments, and finally, to improve non-expert users' work quality and performance through A/B testing of the optimal crowdsourcing platform and application-specific parameters. Case studies and domain expert feedback demonstrate the effectiveness of our framework in improving workers' output quality, its potential to reduce radiologists' interpretation time, and hence its potential to improve the traditional clinical workflow by marking the majority of video segments as benign based on the crowd consensus.
Xiting Wang, Shixia Liu, Yang Chen, Tai-Quan Peng, Jing Su, J. Yang, B. Guo. "How ideas flow across multiple social groups." 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 2016. DOI: https://doi.org/10.1109/VAST.2016.7883511

Abstract: Tracking how correlated ideas flow within and across multiple social groups facilitates the understanding of the transfer of information, opinions, and thoughts on social media. In this paper, we present IdeaFlow, a visual analytics system for analyzing the lead-lag changes within and across pre-defined social groups regarding a specific set of correlated ideas, each of which is described by a set of words. To model idea flows accurately, we develop a random-walk-based correlation model and integrate it with Bayesian conditional cointegration and a tensor-based technique. To convey complex lead-lag relationships over time, IdeaFlow combines the strengths of a bubble tree, a flow map, and a timeline. In particular, we develop a Voronoi-treemap-based bubble tree to help users get an overview of a set of ideas quickly. A correlated-clustering-based layout algorithm is used to simultaneously generate multiple flow maps with less ambiguity. We also introduce a focus+context timeline to explore huge amounts of temporal data at different levels of time granularity. Quantitative evaluation and case studies demonstrate the accuracy and effectiveness of IdeaFlow.
P. H. Nguyen, Kai Xu, A. Bardill, Betul Salman, Kate Herd, B. Wong. "SenseMap: Supporting browser-based online sensemaking through analytic provenance." 2016 IEEE Conference on Visual Analytics Science and Technology (VAST), October 2016. DOI: https://doi.org/10.1109/VAST.2016.7883515

Abstract: Sensemaking is described as the process in which people collect, organize, and create representations of information, all centered around some problem they need to understand. People often get lost when solving complicated tasks using big datasets over long periods of exploration and analysis. They may forget what they have done, are unaware of where they are in the context of the overall task, and are unsure where to continue. In this paper, we introduce a tool, SenseMap, to address these issues in the context of browser-based online sensemaking. We conducted a semi-structured interview with nine participants to explore their behaviors in online sensemaking with existing browser functionality. A simplified sensemaking model based on Pirolli and Card's model is derived to better represent the behaviors we found: users iteratively collect information sources relevant to the task, curate them in a way that makes sense, and finally communicate their findings to others. SenseMap automatically captures the provenance of user sensemaking actions and provides multi-linked views to visualize the collected information and enable users to curate and communicate their findings. To explore how SenseMap is used, we conducted a user study in a naturalistic work setting with five participants completing the same sensemaking task related to their daily work activities. All participants found the visual representation and interaction of the tool intuitive to use. Three of them engaged with the tool and produced successful outcomes. It helped them to organize information sources, to quickly find and navigate to the sources they wanted, and to effectively communicate their findings.
{"title":"SocialBrands: Visual analysis of public perceptions of brands on social media","authors":"Xiaotong Liu, Anbang Xu, Liang Gou, Haibin Liu, R. Akkiraju, Han-Wei Shen","doi":"10.1109/VAST.2016.7883513","DOIUrl":"https://doi.org/10.1109/VAST.2016.7883513","url":null,"abstract":"Public perceptions of a brand is critical to its performance. While social media has demonstrated a huge potential to shape public perceptions of brands, existing tools are not intuitive and explanatory for domain users to use as they fail to provide a comprehensive analysis framework for perceptions of brands. In this paper, we present SocialBrands, a novel visual analysis tool for brand managers to understand public perceptions of brands on social media. Social-Brands leverages brand personality framework in marketing literature and social computing approaches to compute the personality of brands from three driving factors (user imagery, employee imagery, and official announcement) on social media, and construct an evidence network explaining the association between brand personality and driving factors. These computational results are then integrated with new interactive visualizations to help brand managers understand personality traits and their driving factors. We demonstrate the usefulness and effectiveness of SocialBrands through a series of user studies with brand managers in an enterprise context. Design lessons are also derived from our studies.","PeriodicalId":357817,"journal":{"name":"2016 IEEE Conference on Visual Analytics Science and Technology (VAST)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132015842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}