{"title":"“All Right, Mr. DeMille, I’m Ready for My Closeup:” Adding Meaning to User Actions from Video for Immersive Analytics","authors":"A. Batch, N. Elmqvist","doi":"10.1109/MLUI52769.2019.10075557","DOIUrl":"https://doi.org/10.1109/MLUI52769.2019.10075557","url":null,"abstract":"While the use of machine learning and computer vision to classify human behavior has grown into a large, well-established, interdisciplinary area of research, one area that is somewhat overlooked is the intersection of computer vision as a tool for evaluating user behavior in Virtual Reality, particularly in the context of immersive analytics and visualization. We draw on the literature from pattern recognition, computer vision, and machine learning to compose a simple, comparatively resource-cheap pipeline for camera-based extraction of features of professional analyst users and of their sessions in an existing VR visualization system, ImAxes. Our results show high accuracy in predicting self-reported features of the users, even as survey responses about user experience with the immersive interface are somewhat ambiguous in varying based on these features.","PeriodicalId":297242,"journal":{"name":"2019 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"272 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122591058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning from User Interaction for Visualization and Analytics: A Workshop-Generated Research Agenda","authors":"John E. Wenskovitch, Michelle Dowling, Laura Grose, Chris North, Remco Chang, A. Endert, David H. Rogers","doi":"10.1109/MLUI52769.2019.10075560","DOIUrl":"https://doi.org/10.1109/MLUI52769.2019.10075560","url":null,"abstract":"At IEEE VIS 2018, we organized the Machine Learning from User Interaction for Visualization and Analytics workshop. The goal of this workshop was to bring together researchers from across the visualization community to discuss how visualization can benefit from machine learning, with a particular interest in learning from user interaction to improve visualization systems. Following the discussion at the workshop, we aggregated and categorized the ideas, questions, and issues raised by participants over the course of the morning. The result of this compilation is the research agenda presented in this work.","PeriodicalId":297242,"journal":{"name":"2019 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126747113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DeepVA: Bridging Cognition and Computation through Semantic Interaction and Deep Learning","authors":"Yail Bian, John E. Wenskovitch, Chris North","doi":"10.1109/MLUI52769.2019.10075565","DOIUrl":"https://doi.org/10.1109/MLUI52769.2019.10075565","url":null,"abstract":"This paper examines how deep learning (DL) representations, in contrast to traditional engineered features, can support semantic interaction (SI) in visual analytics. SI attempts to model user’s cognitive reasoning via their interaction with data items, based on the data features. We hypothesize that DL representations contain meaningful high-level abstractions that can better capture users’ high-level cognitive intent. To bridge the gap between cognition and computation in visual analytics, we propose DeepVA (Deep Visual Analytics), which uses high-level deep learning representations for semantic interaction instead of low-level hand-crafted data features. To evaluate DeepVA and compare to SI models with lower-level features, we design and implement a system that extends a traditional SI pipeline with features at three different levels of abstraction. To test the relationship between task abstraction and feature abstraction in SI, we perform visual concept learning tasks at three different task abstraction levels, using semantic interaction with three different feature abstraction levels. DeepVA effectively hastened interactive convergence between cognitive understanding and computational modeling of the data, especially in high abstraction tasks.","PeriodicalId":297242,"journal":{"name":"2019 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129838733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shall we play? – Extending the Visual Analytics Design Space through Gameful Design Concepts","authors":"R. Sevastjanova, H. Schäfer, J. Bernard, D. Keim, Mennatallah El-Assady","doi":"10.1109/MLUI52769.2019.10075563","DOIUrl":"https://doi.org/10.1109/MLUI52769.2019.10075563","url":null,"abstract":"Many interactive machine learning workflows in the context of visual analytics encompass the stages of exploration, verification, and knowledge communication. Within these stages, users perform various types of actions based on different human needs. In this position paper, we postulate expanding this workflow by introducing gameful design elements. These can increase a user’s motivation to take actions, to improve a model’s quality, or to exchange insights with others. By combining concepts from visual analytics, human psychology, and gamification, we derive a model for augmenting the visual analytics processes with game mechanics. We argue for automatically learning a parametrization of these game mechanics based on a continuous evaluation of the users’ actions and analysis results. To demonstrate our proposed conceptual model, we illustrate how three existing visual analytics techniques could benefit from incorporating tailored game dynamics. Lastly, we discuss open challenges and point out potential implications for future research.","PeriodicalId":297242,"journal":{"name":"2019 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114441824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Machine Learning and Visualization for Qualitative Inductive Analyses of Big Data","authors":"H. Muthukrishnan, D. Szafir","doi":"10.1109/MLUI52769.2019.10075566","DOIUrl":"https://doi.org/10.1109/MLUI52769.2019.10075566","url":null,"abstract":"Many domains require analyst expertise to determine what patterns and data are interesting in a corpus. However, most analytics tools attempt to prequalify “interestingness” using algorithmic approaches to provide exploratory overviews. This overview-driven workflow precludes the use of qualitative analysis methodologies in large datasets. This paper discusses a preliminary visual analytics approach demonstrating how visual analytics tools can instead enable expert-driven qualitative analyses at scale by supporting computer-in-the-loop mixed initiative approaches. We argue that visual analytics tools can support rich qualitative inference by using machine learning methods to continually model and refine what features correlate to an analyst’s on-going qualitative observations and by providing transparency into these features in order to aid analysts in navigating large corpora during qualitative analyses. We illustrate these ideas through an example from social media analysis and discuss open opportunities for designing visualizations that support qualitative inference through computer-in-the-loop approaches.","PeriodicalId":297242,"journal":{"name":"2019 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129669243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}