Evaluating convergence between two data visualization literacy assessments.

Impact Factor 3.4 · CAS Region 2 (Psychology) · JCR Q1 (Psychology, Experimental)
Erik Brockbank, Arnav Verma, Hannah Lloyd, Holly Huey, Lace Padilla, Judith E Fan
Journal: Cognitive Research: Principles and Implications, 10(1):15
DOI: https://doi.org/10.1186/s41235-025-00622-9
Published: 2025-04-05 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11972256/pdf/
Citations: 0

Abstract

Data visualizations play a crucial role in communicating patterns in quantitative data, making data visualization literacy a key target of STEM education. However, it is currently unclear to what degree different assessments of data visualization literacy measure the same underlying constructs. Here, we administered two widely used graph comprehension assessments (Galesic and Garcia-Retamero in Med Dec Mak 31:444-457, 2011; Lee et al. in IEEE Trans Vis Comput Graph 23:551-560, 2016) to both a university-based convenience sample and a demographically representative sample of adult participants in the USA (N=1,113). Our analysis of individual variability in test performance suggests that overall scores are correlated between assessments and associated with the amount of prior coursework in mathematics. However, further exploration of individual error patterns suggests that these assessments probe somewhat distinct components of data visualization literacy, and we do not find evidence that these components correspond to the categories that guided the design of either test (e.g., questions that require retrieving values rather than making comparisons). Together, these findings suggest opportunities for development of more comprehensive assessments of data visualization literacy that are organized by components that better account for detailed behavioral patterns.
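The core individual-differences analysis the abstract describes (correlating participants' overall scores across the two assessments) can be illustrated with a minimal sketch. The data below are entirely hypothetical and are not the study's data: a simulated latent "literacy" factor induces the kind of between-assessment score correlation the authors report.

```python
import numpy as np

# Hypothetical data, for illustration only: simulate per-participant overall
# scores (proportion correct) on two assessments, where a shared latent
# "literacy" factor drives performance on both.
rng = np.random.default_rng(0)
n = 200  # simulated participants (the actual study had N = 1,113)

literacy = rng.normal(0.0, 1.0, n)  # latent skill per participant
ggr = np.clip(0.70 + 0.10 * literacy + rng.normal(0.0, 0.08, n), 0.0, 1.0)
vlat = np.clip(0.60 + 0.10 * literacy + rng.normal(0.0, 0.08, n), 0.0, 1.0)

# Pearson correlation between overall scores on the two assessments.
r = np.corrcoef(ggr, vlat)[0, 1]
print(f"between-assessment Pearson r = {r:.2f}")
```

Because both simulated scores load on the same latent factor, the correlation is positive; the study's further step, comparing item-level error patterns rather than overall scores, is what revealed that the two tests probe partly distinct components.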

Journal metrics: CiteScore 6.80 · self-citation rate 7.30% · 96 articles per year · average review time 25 weeks