Title: Generative Data Models for Validation and Evaluation of Visualization Techniques
Authors: C. Schulz, Arlind Nocaj, Mennatallah El-Assady, S. Frey, Marcel Hlawatsch, Michael Hund, G. Karch, Rudolf Netzel, Christin Schätzle, Miriam Butt, D. Keim, T. Ertl, U. Brandes, D. Weiskopf
DOI: 10.1145/2993901.2993907 (https://doi.org/10.1145/2993901.2993907)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: We argue that there is a need for substantially more research on the use of generative data models in the validation and evaluation of visualization techniques. For example, user studies require the display of representative and unconfounded visual stimuli, while algorithms need functional coverage and assessable benchmarks. However, data is often collected semi-automatically or entirely hand-picked, which obscures generality, impairs availability, and potentially violates privacy. Some sub-domains of visualization use synthetic data in the sense of generative data models, whereas others work with real-world-based data sets and simulations. Depending on the visualization domain, many generative data models are "side projects" within the ad-hoc validation of a technique paper and are thus neither reusable nor general-purpose. We review existing work on popular data collections and generative data models in visualization to discuss the opportunities and consequences for technique validation, evaluation, and experiment design. We distill guidance on handling generative data models and future directions, and discuss how such models can be engineered and how visualization research could benefit from their broader and better use.
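A generative data model in the sense the authors call for can be as small as a parameterized sampler whose every property is an explicit, controllable factor. The sketch below is a hypothetical illustration (not from the paper): a clustered 2D point generator whose cluster count, size, and spread are parameters, so a study could vary one factor while holding the rest fixed.

```python
import random

def generate_clusters(n_clusters=3, points_per_cluster=50,
                      spread=0.05, seed=42):
    """Synthetic 2D scatter data with controllable structure.

    A minimal generative data model: every property of the stimulus
    (cluster count, cluster size, spread) is an explicit parameter,
    and a fixed seed makes the stimuli reproducible across studies.
    """
    rng = random.Random(seed)
    points = []
    for c in range(n_clusters):
        # Random cluster center in the unit square.
        cx, cy = rng.random(), rng.random()
        for _ in range(points_per_cluster):
            # Gaussian jitter around the center; `spread` controls
            # how visually separable the clusters are.
            points.append((c,
                           cx + rng.gauss(0, spread),
                           cy + rng.gauss(0, spread)))
    return points

data = generate_clusters(n_clusters=4, points_per_cluster=30)
print(len(data))  # 120 labeled points
```

Because the generator is seeded and fully parameterized, the same stimuli can be regenerated exactly, which is what makes such models reusable for benchmarks rather than one-off validation.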
Title: Supporting Exploration of Eye Tracking Data: Identifying Changing Behaviour Over Long Durations
Authors: P. Muthumanickam, C. Forsell, K. Vrotsou, J. Johansson, M. Cooper
DOI: 10.1145/2993901.2993905 (https://doi.org/10.1145/2993901.2993905)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: Visual analytics of eye tracking data is a common tool for evaluation studies across diverse fields. In this position paper we propose a novel user-driven interactive data exploration tool for understanding the characteristics of eye gaze movements and the changes in these behaviours over time. Eye tracking experiments generate multidimensional scan path data with sequential information. Many past mathematical methods have analysed one or a few attributes of the scan path data, along with derived attributes such as Areas of Interest (AoIs), statistical measures, geometry, and domain-specific features. In our work we are interested in visual analytics of one derived attribute of this sequential data: the AoIs and the sequences of visits to them over time. For static stimuli, such as images, or dynamic stimuli, such as videos, predefined or fixed AoIs are not an efficient way of analysing scan path patterns. A user's AoIs over a stimulus may evolve over time, and hence determining AoIs dynamically through temporal clustering could be a better method for analysing eye gaze patterns. In this work we focus primarily on the challenges in analysis and visualization of the temporal evolution of AoIs. The paper discusses existing methods, their shortcomings, and the scope for improvement by adapting visual analytics methods for event-based temporal data to the analysis of eye tracking data.
Title: Information Visualization Heuristics in Practical Expert Evaluation
Authors: H. Väätäjä, Jari Varsaluoma, T. Heimonen, Katariina Tiitinen, Jaakko Hakulinen, M. Turunen, Harri Nieminen, Petri Ihantola
DOI: 10.1145/2993901.2993918 (https://doi.org/10.1145/2993901.2993918)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: While traditional HCI heuristics can also uncover usability issues in information visualization systems, specialized heuristics tailored to the information visualization (InfoVis) domain can be more effective and focus on the special characteristics of these systems. In this study, we describe the application of ten information visualization heuristics from prior research and their testing in a practical heuristic evaluation. We found that the selected heuristics were useful and provided good coverage in our application case. However, based on our observations, we argue that heuristics related to interaction, veracity, and aesthetics should be added to the previously used set. A lack of domain knowledge left the evaluators somewhat unsure of their ability to carry out the investigation in depth. We suggest training domain experts, who understand the data and application domain, to carry out the evaluation in order to obtain insightful feedback beyond usability issues.
Title: Measuring Cognitive Load using Eye Tracking Technology in Visual Computing
Authors: Johannes Zagermann, Ulrike Pfeil, Harald Reiterer
DOI: 10.1145/2993901.2993908 (https://doi.org/10.1145/2993901.2993908)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: In this position paper we encourage the use of eye tracking measurements to investigate users' cognitive load while interacting with a system. We start with an overview of how eye movements can be interpreted to provide insight about cognitive processes and present a descriptive model representing the relations of eye movements and cognitive load. We then discuss how specific characteristics of human-computer interaction (HCI) interfere with the model and impede the application of eye tracking data to measure cognitive load in visual computing. As a result, we present a refined model, embedding the characteristics of HCI into the relation of eye tracking data and cognitive load. Based on this, we argue that eye tracking should be considered a valuable instrument for analyzing cognitive processes in visual computing and suggest future research directions to tackle outstanding issues.
Title: An Empire Built On Sand: Reexamining What We Think We Know About Visualization
Authors: Robert Kosara
DOI: 10.1145/2993901.2993909 (https://doi.org/10.1145/2993901.2993909)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: If we were to design information visualization from scratch, we would start with the basics: understand the principles of perception, test how they apply to different data encodings, build up those encodings to see if the principles still apply, etc. Instead, visualization was created from the other end: by building visual displays without an idea of how or whether they worked, and then finding the relevant perceptual and other basics here and there.

This approach has the problem that we end up with a very patchy understanding of the foundations of our field. More than that, a good amount of unproven assumptions, aesthetic judgments, etc. is mixed in with the evidence. We often don't even realize how much we rely on the latter, and can't easily identify such assumptions because they have been so deeply incorporated into the fabric of our field.

In this paper, I attempt to tease apart what we know and what we only think we know, using a few examples. The goal is to point out specific gaps in our knowledge, and to encourage researchers in the field to start questioning the underlying assumptions. Some of them are probably sound and will hold up to scrutiny. But some of them will not. We need to find out which is which and systematically build up a better foundation for our field. If we intend to develop ever more and better techniques and systems, we can't keep ignoring the base, or it will all come tumbling down sooner or later.
Title: Evaluation of Visualization by Critiques
Authors: R. Brath, E. Banissi
DOI: 10.1145/2993901.2993904 (https://doi.org/10.1145/2993901.2993904)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: In this position paper, we extend design critiques as a form of evaluation to visualization, focusing on the qualities of critiques that distinguish them from other types of evaluation by inspection, such as heuristic evaluation, models, reviews, or written criticism. Critiques can address a broader scope and context of issues than other inspection techniques, and they utilize bi-directional dialogue with multiple critics, including non-visualization critics.
Title: Action Design Research and Visualization Design
Authors: Nina McCurdy, J. Dykes, Miriah D. Meyer
DOI: 10.1145/2993901.2993916 (https://doi.org/10.1145/2993901.2993916)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: In applied visualization research, artifacts are shaped by a series of small design decisions, many of which are evaluated quickly and informally via methods that often go unreported and unverified. Such design decisions are influenced not only by visualization theory, but also by the people and context of the research. While existing applied visualization models support a level of reliability throughout the design process, they fail to explicitly account for the influence of the research context in shaping the resulting design artifacts. In this work, we look to action design research (ADR) for insight into addressing this issue. In particular, ADR offers a framework along with a set of guiding principles for navigating and capitalizing on the disruptive, subjective, human-centered nature of applied design work, while aiming to ensure reliability of the process and design, and emphasizing opportunities for conducting research. We explore the utility of ADR in increasing the reliability of applied visualization design research by: describing ADR in the language and constructs developed within the visualization community; comparing ADR to existing visualization methodologies; and analyzing a recent design study retrospectively through the lens of ADR's framework and principles.
Title: Looking at the Representations in our Mind: Measuring Mental Models of Information Visualizations
Authors: E. Mayr, Günther Schreder, M. Smuc, F. Windhager
DOI: 10.1145/2993901.2993914 (https://doi.org/10.1145/2993901.2993914)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: Users of information visualization systems build up internal representations, or mental models, of the displayed information and the system, and constantly update them during interaction with the system. Though this theoretical approach was postulated as promising for information visualization, measures for empirical studies are missing. In this paper, we present different measures and evaluation procedures that have been developed for the assessment of mental models in other domains and discuss their suitability for the evaluation of internal and external representations in information visualization.
Title: Evaluating Visualization Sets: Trade-offs Between Local Effectiveness and Global Consistency
Authors: Zening Qu, J. Hullman
DOI: 10.1145/2993901.2993910 (https://doi.org/10.1145/2993901.2993910)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2016-10-24
Abstract: Evaluation criteria like expressiveness and effectiveness favor optimal use of space and visual encoding channels in a single visualization. However, individually optimized views may be inconsistent with one another when presented as a set in recommender systems and narrative visualizations. For example, two visualizations might use very similar color palettes for different data fields, or they might render the same field at different scales. These inconsistencies in visualization sets can cause interpretation errors and increase the cognitive load on viewers trying to analyze a set of visualizations. We propose two high-level principles for evaluating visualization set consistency: (1) the same fields should be presented in the same way, and (2) different fields should be presented differently. These two principles are operationalized as a set of constraints for common visual encoding channels (x, y, color, size, and shape) to enable automated visualization set evaluation. To balance global (visualization set) consistency and local (single visualization) effectiveness, trade-offs in space and visual encodings have to be made. We devise an effectiveness preservation score to guide the selection of which conflicts to surface and potentially revise for sets of quantitative and ordinal encodings, and a palette resource allocation mechanism for nominal encodings.
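The two consistency principles lend themselves to a mechanical check over a set of chart specifications. The sketch below is a hypothetical illustration, not the authors' operationalization: the spec format (a dict from channel to field name) and the choice to flag only retinal channels like color and shape for principle 2 are assumptions made for brevity.

```python
def consistency_conflicts(specs, shared_channels=("color", "shape")):
    """Flag consistency violations across a set of visualization specs.

    Principle 1: the same field should be presented the same way
    (flag a field that is mapped to different channels across specs).
    Principle 2: different fields should be presented differently
    (flag a retinal channel, e.g. color, reused for distinct fields,
    since reusing the same palette for different data is misleading).
    Positional channels (x, y) are allowed to carry different fields
    in different charts in this simplified sketch.
    """
    field_channels = {}   # field name -> channels it appears on
    channel_fields = {}   # channel -> fields it encodes
    for spec in specs:
        for channel, field in spec.items():
            field_channels.setdefault(field, set()).add(channel)
            channel_fields.setdefault(channel, set()).add(field)

    conflicts = []
    for field, channels in sorted(field_channels.items()):
        if len(channels) > 1:
            conflicts.append(("inconsistent-field", field, sorted(channels)))
    for channel in shared_channels:
        fields = channel_fields.get(channel, set())
        if len(fields) > 1:
            conflicts.append(("shared-channel", channel, sorted(fields)))
    return conflicts

specs = [{"x": "date", "color": "region"},
         {"y": "date", "color": "category"}]
for kind, name, members in consistency_conflicts(specs):
    # flags 'date' (x vs. y) and 'color' (region vs. category)
    print(kind, name, members)
```

Surfacing conflicts is only half of the paper's contribution; ranking which ones to revise would additionally need something like the effectiveness preservation score described in the abstract.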
Title: Which visualizations work, for what purpose, for whom?: Evaluating visualizations of terrestrial and aquatic systems
Authors: J. Cushing, Evan Hayduk, Jerilyn Walley, Kirsten M. Winters, D. Lach, Michael Bailey, Christoph K. Thomas, S. Stafford
DOI: 10.1145/2442576.2442579 (https://doi.org/10.1145/2442576.2442579)
Venue: Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Published: 2012-10-14
Abstract: A need for better ecology visualization tools is well documented, and development of these is underway, including our own NSF-funded Visualization of Terrestrial and Aquatic Systems (VISTAS) project, now beginning its second of four years. VISTAS' goal is not only to devise visualizations that help ecologists in research and in communicating that research, but also to evaluate the visualizations and software. Thus, we ask "which visualizations work, for what purpose, and for which audiences," and our project involves equal participation of ecologists, computer scientists, and social scientists. We have begun to study visualization use by ecologists, assessed some existing software products, and implemented a prototype. This position paper reports how we apply social science methods in establishing context for VISTAS' evaluation and development. We describe our initial surveys of ecologists and ecology journals to determine current visualization use, outline our visualization evaluation strategies, and conclude by posing questions critical to the evaluation, deployment, and adoption of VISTAS and VISTAS-like visualizations and software.