2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV): Latest Publications

How to evaluate data visualizations across different levels of understanding
Alyxander Burns, Cindy Xiong, S. Franconeri, A. Cairo, Narges Mahyar
{"title":"How to evaluate data visualizations across different levels of understanding","authors":"Alyxander Burns, Cindy Xiong, S. Franconeri, A. Cairo, Narges Mahyar","doi":"10.1109/BELIV51497.2020.00010","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00010","url":null,"abstract":"Understanding a visualization is a multi-level process. A reader must extract and extrapolate from numeric facts, understand how those facts apply to both the context of the data and other potential contexts, and draw or evaluate conclusions from the data. A well-designed visualization should support each of these levels of understanding. We diagnose levels of understanding of visualized data by adapting Bloom’s taxonomy, a common framework from the education literature. We describe each level of the framework and provide examples for how it can be applied to evaluate the efficacy of data visualizations along six levels of knowledge acquisition - knowledge, comprehension, application, analysis, synthesis, and evaluation. We present three case studies showing that this framework expands on existing methods to comprehensively measure how a visualization design facilitates a viewer’s understanding of visualizations. Although Bloom’s original taxonomy suggests a strong hierarchical structure for some domains, we found few examples of dependent relationships between performance at different levels for our three case studies. If this level-independence holds across new tested visualizations, the taxonomy could serve to inspire more targeted evaluations of levels of understanding that are relevant to a communication goal.","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"177 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124392897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
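As an illustration of how a taxonomy-based evaluation like this might be operationalized, here is a minimal Python sketch: each evaluation question is tagged with one of the six levels named in the abstract and participant responses are scored per level. The question wording, data, and scoring scheme are hypothetical illustrations, not the paper's instrument.

```python
# Minimal sketch: scoring visualization-evaluation questions by the six
# levels of Bloom's taxonomy named in the abstract. Questions and data
# below are hypothetical, not taken from the paper.
from dataclasses import dataclass, field

LEVELS = ["knowledge", "comprehension", "application",
          "analysis", "synthesis", "evaluation"]

@dataclass
class EvalQuestion:
    level: str                                            # one of LEVELS
    prompt: str                                           # shown to participant
    responses: list[bool] = field(default_factory=list)   # correct/incorrect per participant

def score_by_level(questions: list[EvalQuestion]) -> dict[str, float]:
    """Return mean accuracy per taxonomy level, skipping untested levels."""
    scores: dict[str, float] = {}
    for level in LEVELS:
        answers = [r for q in questions if q.level == level for r in q.responses]
        if answers:
            scores[level] = sum(answers) / len(answers)
    return scores

# Hypothetical usage: two questions targeting different levels.
bank = [
    EvalQuestion("knowledge", "What was the unemployment rate in 2009?", [True, True, False]),
    EvalQuestion("evaluation", "Does the chart support the headline's claim?", [True, False, False]),
]
print(score_by_level(bank))  # mean accuracy per taxonomy level
```

Keeping per-level scores separate, rather than one aggregate accuracy, is what would let a study check the level-independence the abstract reports.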
Data-First Visualization Design Studies
Michael Oppermann, T. Munzner
{"title":"Data-First Visualization Design Studies","authors":"Michael Oppermann, T. Munzner","doi":"10.1109/BELIV51497.2020.00016","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00016","url":null,"abstract":"We introduce the notion of a data-first design study which is triggered by the acquisition of real-world data instead of specific stakeholder analysis questions. We propose an adaptation of the design study methodology framework to provide practical guidance and to aid transferability to other data-first design processes. We discuss opportunities and risks by reflecting on two of our own data-first design studies. We review 64 previous design studies and identify 16 of them as edge cases with characteristics that may indicate a data-first design process in action.","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"204 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131802515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems
Jeremy E. Block, E. Ragan
{"title":"Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems","authors":"Jeremy E. Block, E. Ragan","doi":"10.1109/BELIV51497.2020.00012","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00012","url":null,"abstract":"Many interactive data systems combine visual representations of data with embedded algorithmic support for automation and data exploration. To effectively support transparent and explainable data systems, it is important for researchers and designers to know how users understand the system. We discuss the evaluation of users’ mental models of system logic. Mental models are challenging to capture and analyze. While common evaluation methods aim to approximate the user’s final mental model after a period of system usage, user understanding continuously evolves as users interact with a system over time. In this paper, we review many common mental model measurement techniques, discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation of mental models when using interactive data analysis and visualization systems. We present guidelines for evaluating mental models over time to help track the evolution of specific model updates and how they may map to the particular use of interface features and data queries. By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight into a user’s conceptualization process while also helping guide users to their own discoveries.","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116664916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
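To make the idea of time-ordered micro-entries concrete, here is a minimal sketch assuming a JSON-lines log and a two-part prompt ("what do you know?" / "how do you know it?") paraphrased from the abstract; the schema and function names are hypothetical, not the paper's instrument.

```python
# Minimal sketch: recording time-stamped micro-entries in which a user
# states a current belief about the system and the evidence for it,
# tagged with the interface feature in use. Schema is hypothetical.
import json
import time

def record_micro_entry(log_path: str, belief: str, evidence: str,
                       interface_context: str) -> None:
    """Append one micro-entry, keyed by timestamp, to a JSON-lines log."""
    entry = {
        "timestamp": time.time(),
        "belief": belief,                         # "what do you know?"
        "evidence": evidence,                     # "how do you know it?"
        "interface_context": interface_context,   # feature or query in use
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage after a user explores a clustering view:
record_micro_entry("session_42.jsonl",
                   belief="The system groups documents by topic",
                   evidence="Similar headlines landed in the same cluster",
                   interface_context="cluster-scatterplot")
```

Because each entry carries a timestamp and an interface context, the resulting log supports the mapping the abstract calls for between model updates and the use of specific features and queries.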
What Do We Actually Learn from Evaluations in the "Heroic Era" of Visualization?: Position Paper
M. Correll
{"title":"What Do We Actually Learn from Evaluations in the “Heroic Era” of Visualization? : Position Paper","authors":"M. Correll","doi":"10.1109/BELIV51497.2020.00013","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00013","url":null,"abstract":"We often point to the relative increase in the amount and sophistication of evaluations of visualization systems versus the earliest days of the field as evidence that we are maturing as a field. I am not so convinced. In particular, I feel that evaluations of visualizations, as they are ordinarily performed in the field or asked for by reviewers, fail to tell us very much that is useful or transferable about visualization systems, regardless of the statistical rigor or ecological validity of the evaluation. Through a series of thought experiments, I show how our current conceptions of visualization evaluations can be incomplete, capricious, or useless for the goal of furthering the field, more in line with the “heroic age” of medical science than the rigorous evidence-based field we might aspire to be. I conclude by suggesting that our models for designing evaluations, and our priorities as a field, should be revisited.","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132545603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Towards Identification and Mitigation of Task-Based Challenges in Comparative Visualization Studies
Aditeya Pandey, Uzma Haque Syeda, M. Borkin
{"title":"Towards Identification and Mitigation of Task-Based Challenges in Comparative Visualization Studies","authors":"Aditeya Pandey, Uzma Haque Syeda, M. Borkin","doi":"10.1109/BELIV51497.2020.00014","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00014","url":null,"abstract":"The effectiveness of a visualization technique is dependent on how well it supports the tasks or goals of an end-user. To measure the effectiveness of a visualization technique, researchers often use a comparative study design. In a comparative study, two or more visualization techniques are compared over a set of tasks and commonly measure human performance in terms of task accuracy and completion time. Despite the critical role of tasks in comparative studies, the current lack of guidance in existing literature on best practices for task selection and communication of research results in evaluation studies is problematic. In this work, we systematically identify and curate the task-based challenges of comparative studies by reviewing existing visualization literature on the topic. Furthermore, for each of the presented challenges we discuss the potential threats to validity for a comparative study. The challenges discussed in this paper are further backed by evidence identified in a detailed survey of comparative tree visualization studies. Finally, we recommend best practices from personal experience and the surveyed tree visualization studies to provide guidelines for other researchers to mitigate the challenges. The survey data and a free copy of the paper is available at https://osf.io/g3btk/","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126635730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
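To make the measurement setup the abstract describes concrete, here is a minimal sketch of a per-technique accuracy and completion-time summary for a comparative study; the technique names and trial data are invented for illustration and do not come from the surveyed studies.

```python
# Minimal sketch: summarizing the two performance measures the abstract
# names (task accuracy, completion time) per visualization technique.
# Technique names and numbers below are hypothetical.
from statistics import mean

# (technique, task_correct, completion_seconds), one row per trial
trials = [
    ("treemap", True, 12.4), ("treemap", False, 18.9), ("treemap", True, 10.2),
    ("icicle",  True,  9.8), ("icicle",  True,  11.5), ("icicle",  False, 14.0),
]

def summarize(technique: str) -> tuple[float, float]:
    """Mean accuracy and mean completion time for one technique."""
    rows = [t for t in trials if t[0] == technique]
    accuracy = mean(1.0 if correct else 0.0 for _, correct, _ in rows)
    seconds = mean(s for *_, s in rows)
    return accuracy, seconds

for tech in ("treemap", "icicle"):
    acc, sec = summarize(tech)
    print(f"{tech}: accuracy={acc:.2f}, mean time={sec:.1f}s")
```

A comparison like this is only as meaningful as the task set behind it, which is exactly the selection-and-reporting gap the paper targets.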