Technology-Enhanced Items and Model–Data Misfit

Q3 Social Sciences
Carol Eckerly, Yue Jia, Paul Jewsbury
{"title":"技术增强项目和模型数据不匹配","authors":"Carol Eckerly,&nbsp;Yue Jia,&nbsp;Paul Jewsbury","doi":"10.1002/ets2.12353","DOIUrl":null,"url":null,"abstract":"<p>Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet referred to as a branching item. Under the branching format, all test takers are assigned to a common question, and the assignment of the next question relies on the response to the first question through deterministic rules. In addition, the items at both stages are scored together as one polytomous item. Real and simulated examples are provided to discuss challenges in applying IRT models to branching items. We find that model–data misfit is likely to occur when branching items are scored as polytomous items and modeled with the generalized partial credit model and that the relationship between the discrimination of the routing component and the discriminations of the subsequent components seemed to drive the misfit. We conclude with lessons learned and provide suggested guidelines and considerations for operationalizing the use of branching items in future assessments.</p>","PeriodicalId":11972,"journal":{"name":"ETS Research Report Series","volume":"2022 1","pages":"1-16"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12353","citationCount":"0","resultStr":"{\"title\":\"Technology-Enhanced Items and Model–Data Misfit\",\"authors\":\"Carol Eckerly,&nbsp;Yue Jia,&nbsp;Paul Jewsbury\",\"doi\":\"10.1002/ets2.12353\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet referred to as a branching item. Under the branching format, all test takers are assigned to a common question, and the assignment of the next question relies on the response to the first question through deterministic rules. In addition, the items at both stages are scored together as one polytomous item. Real and simulated examples are provided to discuss challenges in applying IRT models to branching items. We find that model–data misfit is likely to occur when branching items are scored as polytomous items and modeled with the generalized partial credit model and that the relationship between the discrimination of the routing component and the discriminations of the subsequent components seemed to drive the misfit. 
We conclude with lessons learned and provide suggested guidelines and considerations for operationalizing the use of branching items in future assessments.</p>\",\"PeriodicalId\":11972,\"journal\":{\"name\":\"ETS Research Report Series\",\"volume\":\"2022 1\",\"pages\":\"1-16\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ets2.12353\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ETS Research Report Series\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ets2.12353\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ETS Research Report Series","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ets2.12353","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 0

Abstract



Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet referred to as a branching item. Under the branching format, all test takers are assigned a common first question, and the next question is assigned by deterministic rules based on the response to the first. In addition, the items at both stages are scored together as one polytomous item. Real and simulated examples are provided to discuss challenges in applying IRT models to branching items. We find that model–data misfit is likely to occur when branching items are scored as polytomous items and modeled with the generalized partial credit model, and that the relationship between the discrimination of the routing component and the discriminations of the subsequent components seems to drive the misfit. We conclude with lessons learned and provide suggested guidelines and considerations for operationalizing the use of branching items in future assessments.
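
To make the misfit mechanism concrete, here is a minimal sketch of the contrast the abstract describes. It assumes, purely for illustration and not as details taken from the report, that each stage of the branching item follows a two-parameter logistic (2PL) response function and that the polytomous score is the sum of the two binary stage scores; it then compares the implied category probabilities with those of a generalized partial credit model (GPCM), which constrains all score steps to share a single discrimination. The function names (gpcm_probs, branching_probs) and all parameter values are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gpcm_probs(theta, a, b):
    """GPCM category probabilities for categories 0..M.

    theta : latent trait value
    a     : single discrimination shared by all score steps
    b     : step difficulties b_1..b_M
    """
    b = np.asarray(b, dtype=float)
    # Cumulative sums of a*(theta - b_v); category 0 has an empty sum (= 0).
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - b))))
    steps -= steps.max()  # subtract max for numerical stability
    expd = np.exp(steps)
    return expd / expd.sum()

def branching_probs(theta, a1, b1, a_easy, b_easy, a_hard, b_hard):
    """Category probabilities implied by a two-stage branching item.

    The routing item and both second-stage items are assumed to follow
    2PL response functions; the polytomous score is assumed to be the
    sum of the two binary stage scores (0, 1, or 2).
    """
    p1 = sigmoid(a1 * (theta - b1))            # correct on routing item
    p_easy = sigmoid(a_easy * (theta - b_easy))  # routed here if incorrect
    p_hard = sigmoid(a_hard * (theta - b_hard))  # routed here if correct
    p0 = (1 - p1) * (1 - p_easy)                      # wrong, then wrong
    p1_tot = (1 - p1) * p_easy + p1 * (1 - p_hard)    # exactly one correct
    p2 = p1 * p_hard                                  # right, then right
    return np.array([p0, p1_tot, p2])

# Illustrative comparison: routing discrimination (2.0) differs from the
# stage-2 discriminations (0.8), so the branching probabilities cannot be
# reproduced exactly by any GPCM with a single discrimination.
for theta in (-1.0, 0.0, 1.0):
    truth = branching_probs(theta, a1=2.0, b1=0.0,
                            a_easy=0.8, b_easy=-1.0,
                            a_hard=0.8, b_hard=1.0)
    approx = gpcm_probs(theta, a=1.0, b=[-1.0, 1.0])
    print(theta, truth.round(3), approx.round(3))
```

When the routing discrimination differs from the stage-2 discriminations, no choice of the single GPCM discrimination matches the branching probabilities at every theta, which is one way the misfit described in the abstract can arise.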

Source journal: ETS Research Report Series (Social Sciences – Education)
CiteScore: 1.20 · Self-citation rate: 0.00% · Publications: 17