From byproduct to design factor: on validating the interpretation of process indicators based on log data

Impact Factor: 2.6 · JCR Q1 (Education & Educational Research)
Frank Goldhammer, Carolin Hahnel, Ulf Kroehne, Fabian Zehner
Journal: Large-Scale Assessments in Education
DOI: 10.1186/s40536-021-00113-5
Published: 2021-10-01 (journal article)
Citations: 13

Abstract

International large-scale assessments such as PISA or PIAAC have started to provide public or scientific use files for log data; that is, events, event-related attributes and timestamps of test-takers’ interactions with the assessment system. Log data and the process indicators derived from it can be used for many purposes. However, the intended uses and interpretations of process indicators require validation, which here means a theoretical and/or empirical justification that inferences about (latent) attributes of the test-taker’s work process are valid. This article reviews and synthesizes measurement concepts from various areas, including the standard assessment paradigm, the continuous assessment approach, the evidence-centered design (ECD) framework, and test validation. Based on this synthesis, we address the questions of how to ensure the valid interpretation of process indicators by means of an evidence-centered design of the task situation, and how to empirically challenge the intended interpretation of process indicators by developing and implementing correlational and/or experimental validation strategies. For this purpose, we explicate the process of reasoning from log data to low-level features and process indicators as the outcome of evidence identification. In this process, contextualizing information from log data is essential in order to reduce interpretative ambiguities regarding the derived process indicators. Finally, we show that empirical validation strategies can be adapted from classical approaches investigating the nomothetic span and construct representation. Two worked examples illustrate possible validation strategies for the design phase of measurements and their empirical evaluation.
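As a purely illustrative sketch of the evidence-identification step described above (the event names, log layout, and helper functions here are invented for the example, not taken from PISA, PIAAC, or any actual assessment system), deriving low-level features from timestamped log events might look like:

```python
from datetime import datetime

# Hypothetical log events: (timestamp, event type, item id) triples, a
# minimal stand-in for the events, event-related attributes, and
# timestamps that log data files provide.
log = [
    ("2021-10-01T10:00:00", "item_start", "task1"),
    ("2021-10-01T10:00:12", "text_highlight", "task1"),
    ("2021-10-01T10:01:30", "response_change", "task1"),
    ("2021-10-01T10:02:05", "item_end", "task1"),
]

def time_on_task(events, item_id):
    """Low-level feature: elapsed seconds between item start and item end."""
    ts = {e: datetime.fromisoformat(t)
          for t, e, i in events
          if i == item_id and e in ("item_start", "item_end")}
    return (ts["item_end"] - ts["item_start"]).total_seconds()

def interaction_count(events, item_id):
    """Low-level feature: number of interactions logged within an item."""
    return sum(1 for _, e, i in events
               if i == item_id and e not in ("item_start", "item_end"))

print(time_on_task(log, "task1"))        # 125.0 seconds
print(interaction_count(log, "task1"))   # 2 interactions
```

Whether such a feature supports an interpretation as a process indicator (e.g., reading engagement) is exactly the validation question the article raises: the same time-on-task value is ambiguous without contextualizing information about the task situation.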

Source journal: Large-Scale Assessments in Education (Social Sciences – Education)
CiteScore: 4.30
Self-citation rate: 6.50%
Annual publications: 16
Review time: 13 weeks