{"title":"The Garden of Forking Paths in Visualization: A Design Space for Reliable Exploratory Visual Analytics : Position Paper","authors":"Xiaoying Pu, Matthew Kay","doi":"10.1109/BELIV.2018.8634103","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634103","url":null,"abstract":"Tukey emphasized decades ago that taking exploratory findings as confirmatory is “destructively foolish”. We reframe recent conversations about the reliability of results from exploratory visual analytics—such as the multiple comparisons problem—in terms of Gelman and Loken’s garden of forking paths to lay out a design space for addressing the forking paths problem in visual analytics. This design space encompasses existing approaches to address the forking paths problem (multiple comparison correction) as well as solutions that have not been applied to exploratory visual analytics (regularization). We also discuss how perceptual bias correction techniques may be used to correct biases induced in analysts’ understanding of their data due to the forking paths problem, and outline how this problem can be cast as a threat to validity within Munzner’s Nested Model of visualization design. Finally, we suggest paper review guidelines to encourage reviewers to consider the forking paths problem when evaluating future designs of visual analytics tools.","PeriodicalId":269472,"journal":{"name":"2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114941238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Case for Cognitive Models in Visualization Research : Position paper","authors":"Lace M. K. Padilla","doi":"10.1109/BELIV.2018.8634267","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634267","url":null,"abstract":"The visualization community has seen a rise in the adoption of user studies. Empirical user studies systematically test the assumptions that we make about how visualizations can help or hinder viewers’ performance of tasks. Although the increase in user studies is encouraging, it is vital that research on human reasoning with visualizations be grounded in an understanding of how the mind functions. Previously, there were no sufficient models that illustrate the process of decision-making with visualizations. However, Padilla et al. [41] recently proposed an integrative model for decision-making with visualizations, which expands on modern theories of visualization cognition and decision-making. In this paper, we provide insights into how cognitive models can accelerate innovation, improve validity, and facilitate replication efforts, which have yet to be thoroughly discussed in the visualization community. To do this, we offer a compact overview of the cognitive science of decision-making with visualizations for the visualization community, using the Padilla et al. [41] cognitive model as a guiding framework. By detailing examples of visualization research that illustrate each component of the model, this paper offers novel insights into how visualization researchers can utilize a cognitive framework to guide their user studies. We provide practical examples of each component of the model from empirical studies of visualizations, along with visualization implications of each cognitive process, which have not been directly addressed in prior work. Finally, this work offers a case study in utilizing an understanding of human cognition to generate a novel solution to a visualization reasoning bias in the context of hurricane forecast track visualizations.","PeriodicalId":269472,"journal":{"name":"2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123923014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Taxonomy to Requirements: A Task Space Partitioning Approach","authors":"M. El-Shehaly, Natasha Alvarado, Lynn McVey, R. Randell, M. Mamas, R. Ruddle","doi":"10.1109/BELIV.2018.8634027","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634027","url":null,"abstract":"We present a taxonomy-driven approach to requirements specification in a large-scale project setting, drawing on our work to develop visualization dashboards for improving the quality of healthcare. Our aim is to overcome some of the limitations of the qualitative methods that are typically used for requirements analysis. When applied alone, methods like interviews fall short in identifying the full set of functionalities that a visualization system should support. We present a five-stage pipeline to structure user task elicitation and analysis around well-established taxonomic dimensions, and make the following contributions: (i) criteria for selecting dimensions from the large body of task taxonomies in the literature,, (ii) use of three particular dimensions (granularity, type cardinality and target) to create materials for a requirements analysis workshop with domain experts, (iii) a method for characterizing the task space that was produced by the experts in the workshop, (iv) a decision tree that partitions that space and maps it to visualization design alternatives, and (v) validating our approach by testing the decision tree against new tasks that collected through interviews with further domain experts.","PeriodicalId":269472,"journal":{"name":"2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124973216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to Evaluate an Evaluation Study? Comparing and Contrasting Practices in Vis with Those of Other Disciplines : Position Paper","authors":"Anamaria Crisan, Madison Elliott","doi":"10.1109/BELIV.2018.8634420","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634420","url":null,"abstract":"Evaluative practices within vis research are not routinely compared to those of psychology, sociology, or other areas of empirical study, leaving vis vulnerable to the replicability crisis that has embroiled scientific research more generally. In this position paper, we compare contemporary vis evaluative practices against those in those other disciplines, and make concrete recommendations as to how vis evaluative practice can be improved through the use of quantitative, qualitative, and mixed research methods. We summarize our discussion and recommendations as a checklist, that we intend to be used a resource for vis researchers conducting evaluative studies, and for reviewers evaluating the merits of such studies.","PeriodicalId":269472,"journal":{"name":"2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)","volume":"199 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133877540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Characterizing Domain Experts as a User Group","authors":"Yuet Ling Wong, K. Madhavan, N. Elmqvist","doi":"10.1109/BELIV.2018.8634026","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634026","url":null,"abstract":"Visualization is an inherently interdisciplinary research area: visualization researchers are always visualizing other people’s data. While recent trends in the field has turned toward a more inclusive audience, particularly within the topic of “visualization for the masses”, the traditional user group for our tools have been the domain expert: people who are experts in a specific professional domain where they want to apply visualization and analytics, but who often lack high literacy, training, and motivation in visualization and visual analytics. Such domain experts want to opportunistically reap the benefits of visualization, but have no patience for long training, poor interaction or visual design, or complex displays. While domain experts are a familiar user group, surprisingly little effort has been devoted towards characterizing them for both design and evaluation purposes. To help the visualization community better understand this specific group, in this paper, we describe the characteristics of domain experts, discuss existing examples designed for them, and propose possible guidelines to facilitate the design process. We believe this discussion will help visualization researchers better understand this group and uncover more research opportunities.","PeriodicalId":269472,"journal":{"name":"2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)","volume":"47 26","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113957802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heuristic Evaluation in Visualization: An Empirical Study : Position paper","authors":"B. Santos, S. Silva, Paulo Dias","doi":"10.1109/BELIV.2018.8634108","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634108","url":null,"abstract":"Heuristic evaluation is a usability inspection method that has been adapted to evaluate visualization applications through the development of specific sets of heuristics. This paper presents an empirical study meant to assess the capacity of the method to anticipate the usability issues noticed by users when using a visualization application. The potential usability problems identified by 20 evaluators were compared with the issues found for the same application by 46 users through a usability test, as well as with the fixes recommended by the experimenters observing those users during the test. Results suggest that using some heuristics may have elicited potential problems that none of the users noticed while using the application; on the other hand, users encountered unpredicted usability issues.","PeriodicalId":269472,"journal":{"name":"2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)","volume":"IE-29 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114117314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Requirements for Reproducibility of Research in Situational and Spatio-Temporal Visualization : Position Paper","authors":"André Calero Valdez, A. Schaar, J. Hildebrandt, M. Ziefle","doi":"10.1109/BELIV.2018.8634150","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634150","url":null,"abstract":"Research on spatio-temporal visualization is driven by the development of novel visualization and data aggregation techniques. Yet, only little research is conducted on the systematic evaluation of such visualizations. Evaluation of such technology is often conducted in real-life settings and thus lacks fundamental requirements for laboratory-based replication. Replication requires other researchers to independently conduct their own experiments to verify your results. In this position paper, we discuss the requirements for replication studies of spatio-temporal visualization systems. These requirements are often impossible to achieve for highly contextual visualizations such as spatio-temporal visualizations. We argue that reproducibility—allowing other researchers to validate your findings from your data—is a better aim for highly contextual visualizations. We provide a sample workflow to ensure reproducibility for spatio-temporal visualization and discuss its implications.","PeriodicalId":269472,"journal":{"name":"2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125035593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Designing Unbiased Replication Studies in Information Visualization","authors":"P. Sukumar, Ronald A. Metoyer","doi":"10.1109/BELIV.2018.8634261","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634261","url":null,"abstract":"Experimenter bias and expectancy effects have been well studied in the social sciences and even in human-computer interaction. They refer to the nonideal study-design choices made by experimenters which can unfairly influence the outcomes of their studies. While these biases need to be considered when designing any empirical study, they can be particularly significant in the context of replication studies which can stray from the studies being replicated in only a few admissible ways. Although there are general guidelines for making valid, unbiased choices in each of the several steps in experimental design, making such choices when conducting replication studies has not been well explored.We reviewed 16 replication studies in information visualization published in four top venues between 2008 to present to characterize how the study designs of the replication studies differed from those of the studies they replicated. We present our characterization categories which include the prevalence of crowdsourcing, and the commonly-found replication types and study-design differences. We draw guidelines based on these categories towards helping researchers make meaningful and unbiased decisions when designing replication studies. Our paper presents the first steps in gaining a larger understanding of this topic and contributes to the ongoing efforts of encouraging researchers to conduct and publish more replication studies in information visualization.","PeriodicalId":269472,"journal":{"name":"2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127597579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}