Evaluating convergence between two data visualization literacy assessments
Erik Brockbank, Arnav Verma, Hannah Lloyd, Holly Huey, Lace Padilla, Judith E Fan
Cognitive Research: Principles and Implications, 10(1), 15. Published 2025-04-05.
DOI: 10.1186/s41235-025-00622-9
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11972256/pdf/
Abstract
Data visualizations play a crucial role in communicating patterns in quantitative data, making data visualization literacy a key target of STEM education. However, it is currently unclear to what degree different assessments of data visualization literacy measure the same underlying constructs. Here, we administered two widely used graph comprehension assessments (Galesic and Garcia-Retamero in Med Dec Mak 31:444-457, 2011; Lee et al. in IEEE Trans Vis Comput Graph 23:551-560, 2016) to both a university-based convenience sample and a demographically representative sample of adult participants in the USA (N=1,113). Our analysis of individual variability in test performance suggests that overall scores are correlated between assessments and associated with the amount of prior coursework in mathematics. However, further exploration of individual error patterns suggests that these assessments probe somewhat distinct components of data visualization literacy, and we do not find evidence that these components correspond to the categories that guided the design of either test (e.g., questions that require retrieving values rather than making comparisons). Together, these findings suggest opportunities for development of more comprehensive assessments of data visualization literacy that are organized by components that better account for detailed behavioral patterns.
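To make the analyses described above concrete, here is a minimal sketch in Python of how one might test convergence between total scores on the two assessments, relate them to prior math coursework, and take a first look at item-level structure. This is not the authors' released code; the file name and column names (ggr_total, vlat_total, math_courses, item_*) are hypothetical placeholders.

```python
# Sketch of a convergence analysis between two assessments; all data
# file and column names below are assumed for illustration only.
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA

# Hypothetical data: one row per participant, with total scores on each
# assessment, self-reported math coursework, and per-item correctness.
df = pd.read_csv("responses.csv")

# (1) Convergence: correlate total scores on the two assessments
# (Galesic & Garcia-Retamero graph literacy scale vs. Lee et al. VLAT).
r, p = stats.pearsonr(df["ggr_total"], df["vlat_total"])
print(f"score correlation: r = {r:.2f}, p = {p:.3g}")

# (2) Association with prior math coursework; Spearman's rank
# correlation is used here since coursework is an ordinal predictor.
for col in ("ggr_total", "vlat_total"):
    rho, p = stats.spearmanr(df["math_courses"], df[col])
    print(f"{col} vs. math coursework: rho = {rho:.2f}, p = {p:.3g}")

# (3) Error-pattern structure: a quick check of whether item-level
# accuracies group into a few components (the paper reports a far more
# detailed item-level analysis than this two-component summary).
item_cols = [c for c in df.columns if c.startswith("item_")]
pca = PCA(n_components=2).fit(df[item_cols])
print("variance explained by first two components:",
      pca.explained_variance_ratio_)
```

A high score correlation in step (1) alone would not show that the tests measure the same construct, which is why an item-level analysis like step (3) is needed to compare error patterns against the design categories of each test.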