Latest Articles from the International Journal of Testing

Generating reading comprehension items using automated processes
IF 1.7
International Journal of Testing. Pub Date: 2022-10-02. DOI: 10.1080/15305058.2022.2070755
Jinnie Shin, Mark J. Gierl
Abstract: Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts generally, and reading comprehension more specifically. While reading comprehension test items can be created using many different item formats, fill-in-the-blank remains one of the most common when the goal is to measure inferential knowledge. Currently, the item development process used to create fill-in-the-blank reading comprehension items is time-consuming and expensive. Hence, the purpose of this study is to introduce a new systematic method for generating fill-in-the-blank reading comprehension items using an item modeling approach. We describe the use of different unsupervised learning methods that can be paired with natural language processing techniques to identify the salient item models within existing texts. To demonstrate the capacity of our method, 1,013 test items were generated from 100 input texts taken from fill-in-the-blank reading comprehension items used on a high-stakes college entrance exam in South Korea. Our validation results indicated that the generated items produced higher semantic similarities between the item options while depicting little to no syntactic differences from the traditionally written test items.
Citations: 0
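The blanking step this abstract describes is easy to make concrete. Below is a minimal sketch, assuming a crude word-frequency heuristic to pick the key term and distractors; the paper's actual pipeline pairs unsupervised learning with NLP-derived item models and would choose options by semantic similarity, so every name and heuristic here is illustrative only.

```python
# A minimal sketch of fill-in-the-blank item generation, assuming a simple
# frequency-based heuristic for key-term selection. The paper's method is
# richer; this only illustrates the final blanking step on one passage.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are",
             "that", "for", "on", "with", "as", "by", "it", "its"}

def generate_fitb_item(text: str) -> dict:
    """Blank out the most frequent content word in the passage."""
    words = re.findall(r"[A-Za-z]+", text.lower())
    content = [w for w in words if w not in STOPWORDS and len(w) > 3]
    key, _ = Counter(content).most_common(1)[0]          # word to blank
    stem = re.sub(rf"\b{key}\b", "_____", text, flags=re.IGNORECASE)
    # Distractors: other frequent content words. A real generator would use
    # semantic similarity (e.g., word embeddings) to pick plausible foils.
    distractors = [w for w, _ in Counter(content).most_common(4) if w != key][:3]
    return {"stem": stem, "key": key, "options": [key] + distractors}

passage = ("Inference requires readers to combine information across "
           "sentences. Skilled readers draw an inference even when the "
           "text states the information only indirectly.")
print(generate_fitb_item(passage))
```

Running it blanks the passage's most repeated content word and offers three superficially plausible foils; a production generator would also validate the options against the passage semantics.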
Investigating the writing performance of educationally at-risk examinees using technology
IF 1.7
International Journal of Testing. Pub Date: 2022-10-02. DOI: 10.1080/15305058.2022.2050734
Mo Zhang, S. Sinharay
Abstract: This article demonstrates how recent advances in technology allow fine-grained analyses of candidate-produced essays, thus providing deeper insight into writing performance. We examined how essay features, automatically extracted using natural language processing and keystroke logging techniques, can predict various performance measures using data from a large-scale, high-stakes assessment for awarding a high-school equivalency diploma. The features that are the most predictive of writing proficiency and broader academic success were identified and interpreted. The suggested methodology promises to be practically useful because it has the potential to point to specific writing skills that are important for improving essay writing and academic performance for educationally at-risk adult populations like the one considered in this article.
Citations: 0
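The feature-to-score prediction idea can be sketched compactly on simulated data. The features below (essay length, long keystroke pauses, lexical diversity) are stand-ins for the study's NLP and keystroke-log features, and the generating weights are invented.

```python
# A minimal sketch of feature-based writing-score prediction on toy data.
# Feature names are illustrative, not the paper's actual feature set.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
word_count = rng.normal(350, 80, n)          # essay length
long_pauses = rng.poisson(12, n)             # keystroke-log pauses > 2 s
type_token = rng.uniform(0.3, 0.7, n)        # lexical diversity
# Simulated human score driven mostly by length and diversity.
score = (0.01 * word_count + 3.0 * type_token - 0.05 * long_pauses
         + rng.normal(0, 0.5, n))

X = np.column_stack([word_count, long_pauses, type_token])
model = LinearRegression().fit(X, score)
print("R^2 on training data:", round(model.score(X, score), 3))
print("Feature weights:", model.coef_.round(3))
```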
Technology-based assessments: Novel approaches to testing in organizational, psychological, and educational settings
IF 1.7
International Journal of Testing. Pub Date: 2022-10-02. DOI: 10.1080/15305058.2022.2143173
Christopher D. Nye
Abstract (excerpt): ...the International Journal of Testing solicited papers addressing the use of technology for assessments in organizational, psychological, or educational settings. The purpose of inviting these papers was to promote research on this topic and to address important issues related to the development and use of high-quality, technology-based assessments.
Citations: 0
A psychometric view of technology-based assessments
IF 1.7
International Journal of Testing. Pub Date: 2022-10-02. DOI: 10.1080/15305058.2022.2070757
Gloria Liou, Cavan V. Bonner, L. Tay
Abstract: With the advent of big data and advances in technology, psychological assessments have become increasingly sophisticated and complex. Nevertheless, traditional psychometric issues concerning the validity, reliability, and measurement bias of such assessments remain fundamental in determining whether score inferences of human attributes are appropriate. We focus on three technological advances—the use of organic data for psychological assessments, the application of machine learning algorithms, and adaptive and gamified assessments—and review how the concepts of validity, reliability, and measurement bias may apply in particular ways within those areas. This provides direction for researchers and practitioners to advance the rigor of technology-based assessments from a psychometric perspective.
Citations: 0
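One concrete instance of the measurement-bias concern the authors raise is differential prediction: whether a machine-generated score predicts a criterion with the same slope across demographic groups. The sketch below simulates that check; the data and the group effect are assumptions of this illustration, not results from the paper.

```python
# A minimal sketch of a differential-prediction (slope bias) check for an
# algorithmic score, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, n)                 # 0/1 demographic group
ml_score = rng.normal(0, 1, n)                # machine-generated score
# Simulate a criterion whose slope differs slightly by group (by assumption).
criterion = 0.6 * ml_score + 0.15 * group * ml_score + rng.normal(0, 1, n)

df = pd.DataFrame({"criterion": criterion, "ml_score": ml_score,
                   "group": group})
fit = smf.ols("criterion ~ ml_score * group", data=df).fit()
# A significant ml_score:group term signals slope non-invariance.
print(fit.summary().tables[1])
```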
Examining patterns of omitted responses in a large-scale English language proficiency test
IF 1.7
International Journal of Testing. Pub Date: 2022-05-12. DOI: 10.1080/15305058.2022.2070756
Merve Sarac, E. Loken
Abstract: This study is an exploratory analysis of examinee behavior in a large-scale language proficiency test. Despite a number-right scoring system with no penalty for guessing, we found that 16% of examinees omitted at least one answer and that women were more likely than men to omit answers. Item-response theory analyses treating the omitted responses as missing rather than wrong showed that examinees had underperformed by skipping the answers, with a greater underperformance among more able participants. An analysis of omitted answer patterns showed that reading passage items were most likely to be omitted, and that native language-translation items were least likely to be omitted. We hypothesized that since reading passage items were most tempting to skip, then among examinees who did answer every question there might be a tendency to guess at these items. Using cluster analyses, we found that underperformance on the reading items was more likely than underperformance on the non-reading passage items. In large-scale operational tests, examinees must know the optimal strategy for taking the test. Test developers must also understand how examinee behavior might impact the validity of score interpretations.
Citations: 1
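The scoring contrast at the heart of the study, omitted answers treated as wrong versus as missing, can be shown with a toy 2PL ability estimate. The item parameters below are made up; an operational analysis would use calibrated values.

```python
# A minimal sketch contrasting two treatments of an omitted response under
# a 2PL model: exclude it (missing) versus score it 0 (wrong).
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 1.3])      # discriminations (assumed)
b = np.array([-0.5, 0.0, 0.3, 0.8, 1.2])     # difficulties (assumed)

def neg_loglik(theta, resp):
    """2PL negative log-likelihood; NaN responses are skipped (missing)."""
    mask = ~np.isnan(resp)
    p = 1.0 / (1.0 + np.exp(-a[mask] * (theta - b[mask])))
    r = resp[mask]
    return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

def estimate(resp):
    return minimize_scalar(neg_loglik, args=(resp,), bounds=(-4, 4),
                           method="bounded").x

# Examinee answered three of four items correctly and omitted the hardest one.
responses = np.array([1.0, 1.0, 0.0, 1.0, np.nan])
print("omit as missing:", round(estimate(responses), 2))
print("omit as wrong:  ", round(estimate(np.nan_to_num(responses)), 2))
```

With the omit excluded, the estimate reflects only the answered items; scoring it 0 pulls the estimate down, which is the underperformance mechanism the authors describe.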
Using item response theory to understand the effects of scale contextualization: An illustration using decision making style scales
IF 1.7
International Journal of Testing. Pub Date: 2022-03-15. DOI: 10.1080/15305058.2022.2047692
Nathaniel M. Voss, Cassandra Chlevin-Thiele, Christopher J. Lake, Chi-Leigh Q. Warren
Abstract: The goal of this study was to extend research on scale contextualization (i.e., frame-of-reference effect) to the decision making styles construct, compare the effects of contextualization across three unique decision style scales, and examine the consequences of scale contextualization within an item response theory framework. Based on a mixed experimental design, data gathered from 661 university students indicated that contextualized scales yielded higher predictive validity, occasionally possessed psychometric properties better than the original measures, and that the effects of contextualization are somewhat scale-specific. These findings provide important insights for researchers and practitioners seeking to modify and adapt existing scales.
Citations: 1
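The headline finding, higher predictive validity for contextualized scales, corresponds to a simple comparison of criterion correlations. A sketch on simulated data, with the validity gap built in by assumption:

```python
# A minimal sketch comparing predictive validity of a generic versus a
# contextualized version of a scale. The simulated validity gap is an
# assumption of this illustration, not the study's estimate.
import numpy as np

rng = np.random.default_rng(6)
n = 661                                  # matches the study's sample size
criterion = rng.normal(0, 1, n)
generic = 0.30 * criterion + rng.normal(0, 1, n)
contextualized = 0.45 * criterion + rng.normal(0, 1, n)

for name, scale in [("generic", generic), ("contextualized", contextualized)]:
    r = np.corrcoef(scale, criterion)[0, 1]
    print(f"{name:15s} r = {r:.3f}")
```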
The development and validation of the Resilience Index
IF 1.7
International Journal of Testing. Pub Date: 2022-02-25. DOI: 10.1080/15305058.2022.2036162
M. van Wyk, G. Lipinska, M. Henry, T. K. Phillips, P. E. van der Walt
Abstract: Resilience comprises various neurobiological, developmental, and psychosocial components. However, existing measures lack certain critical components, while having limited utility in low-to-middle-income settings. We aimed to develop a reliable and valid measure of resilience encompassing a broad range of components that can be used across different income settings. We also set out to develop empirical cutoff scores for low, moderate, and high resilience. Results from 686 participants revealed the emergence of three components: positive affect (α = 0.879), early-life stability (α = 0.879), and stress mastery (α = 0.683). Convergent and incremental validity were confirmed using an existing resilience measure as the benchmark. Concurrent validity was also confirmed with significant negative correlations with measures of depression, anxiety, posttraumatic stress disorder, and sleep disruption. Finally, we successfully determined cutoff scores for low, moderate, and high resilience. Results confirm that the Resilience Index is a reliable and valid measure that can be utilized in both high- and low-to-middle-income settings.
Citations: 1
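The reported component alphas come from a standard internal-consistency computation. A minimal sketch on simulated data, not the actual Resilience Index items:

```python
# A minimal sketch of Cronbach's alpha, the statistic behind the reported
# component reliabilities, computed on simulated item responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
true_trait = rng.normal(0, 1, (300, 1))
items = true_trait + rng.normal(0, 0.8, (300, 6))   # 6 noisy indicators
print("alpha:", round(cronbach_alpha(items), 3))
```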
Evaluating group differences in online reading comprehension: The impact of item properties
IF 1.7
International Journal of Testing. Pub Date: 2022-02-25. DOI: 10.1080/15305058.2022.2044821
H. Bulut, O. Bulut, Serkan Arıkan
Abstract: This study examined group differences in online reading comprehension (ORC) using student data from the 2016 administration of the Progress in International Reading Literacy Study (ePIRLS). An explanatory item response modeling approach was used to explore the effects of item properties (i.e., item format, text complexity, and cognitive complexity), student characteristics (i.e., gender and language groups), and their interactions on dichotomous and polytomous item responses. The results showed that female students outperform male students in ORC tasks and that the achievement difference between female and male students appears to change as text complexity increases. Similarly, the cognitive complexity of the items seems to play a significant role in explaining the gender gap in ORC performance. Students who never (or sometimes) speak the test language at home particularly struggled with answering ORC tasks. The achievement gap between students who always (or almost always) speak the test language at home and those who never (or sometimes) speak the test language at home was larger for constructed-response items and items with higher cognitive complexity. Overall, the findings suggest that item properties could help understand performance differences between gender and language groups in ORC assessments.
Citations: 3
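Explanatory item response modeling regresses item responses on item properties, student characteristics, and their interactions. The sketch below substitutes a plain logistic regression for the paper's cross-classified model to keep it brief, on simulated data in which a female-by-complexity interaction is wired in so the fitted term has something to recover.

```python
# A minimal sketch of the explanatory modeling idea: item responses
# regressed on an item property, a student characteristic, and their
# interaction. A full analysis would use a cross-classified mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_students, n_items = 300, 20
ability = rng.normal(0, 1, n_students)
female = rng.integers(0, 2, n_students)
text_complex = rng.integers(0, 2, n_items)    # item property: complex text?

rows = []
for s in range(n_students):
    for i in range(n_items):
        # Female advantage that shrinks as text complexity increases.
        logit = (ability[s] + 0.4 * female[s] - 0.5 * text_complex[i]
                 - 0.3 * female[s] * text_complex[i])
        rows.append({"y": rng.binomial(1, 1 / (1 + np.exp(-logit))),
                     "female": female[s], "complex": text_complex[i]})
df = pd.DataFrame(rows)
fit = smf.logit("y ~ female * complex", data=df).fit(disp=0)
print(fit.params.round(3))   # female:complex term captures the interaction
```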
An investigation of item, examinee, and country correlates of rapid guessing in PISA
IF 1.7
International Journal of Testing. Pub Date: 2022-02-09. DOI: 10.1080/15305058.2022.2036161
Joseph A. Rios, J. Soland
Abstract: The objective of the present study was to investigate item-, examinee-, and country-level correlates of rapid guessing (RG) in the context of the 2018 PISA science assessment. Analyzing data from 267,148 examinees across 71 countries showed that over 50% of examinees engaged in RG on an average proportion of one in 10 items. Descriptive differences were noted between countries on the mean number of RG responses per examinee with discrepancies as large as 500%. Country-level differences in the odds of engaging in RG were associated with mean performance and regional membership. Furthermore, based on a two-level cross-classified hierarchical linear model, both item- and examinee-level correlates were found to moderate the likelihood of RG. Specifically, the inclusion of items with multimedia content was associated with a decrease in RG. A number of demographic and attitudinal examinee-level variables were also significant moderators, including sex, linguistic background, SES, and self-rated reading comprehension, motivation mastery, and fear of failure. The findings from this study imply that select subgroup comparisons within and across nations may be biased by differential test-taking effort. To mitigate RG in international assessments, future test developers may look to leverage technology-enhanced items.
Citations: 6
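Rapid guessing is typically flagged from response times. Below is a minimal sketch using one common heuristic, a threshold at 10% of the item's median response time; the operational criteria used with PISA data may differ, so treat the numbers as illustrative.

```python
# A minimal sketch of response-time-based rapid-guessing flags using a
# normative-threshold heuristic on simulated response times for one item.
import numpy as np

rng = np.random.default_rng(4)
# 1,000 examinees: most respond deliberately, a minority guess rapidly.
solution = rng.lognormal(mean=3.4, sigma=0.4, size=950)   # ~30 s typical
rapid = rng.uniform(0.5, 3.0, size=50)
rt = np.concatenate([solution, rapid])

threshold = 0.10 * np.median(rt)      # 10% of median response time
flags = rt < threshold
print(f"threshold = {threshold:.1f} s, flagged = {flags.mean():.1%}")
```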
Dropping the GRE, keeping the GRE, or GRE-optional admissions? Considering tradeoffs and fairness
IF 1.7
International Journal of Testing. Pub Date: 2022-01-02. DOI: 10.1080/15305058.2021.2019750
Daniel A. Newman, Chen Tang, Q. Song, Serena Wee
Abstract: In considering whether to retain the GRE in graduate school admissions, admissions committees often pursue two objectives: (a) performance in graduate school (e.g., admitting individuals who will perform better in classes and research), and (b) diversity/fairness (e.g., equal selection rates between demographic groups). Drawing upon HR research (adverse impact research), we address four issues in using the GRE. First, we review the tension created between two robust findings: (a) validity of the GRE for predicting graduate school performance (rooted in the principle of standardization and a half-century of educational and psychometric research), and (b) the achievement gap in test scores between demographic groups (rooted in several centuries of systemic racism). This empirical tension can often produce a local diversity-performance tradeoff for admissions committees. Second, we use Pareto-optimal tradeoff curves to formalize potential diversity-performance tradeoffs, guiding how much weight to assign the GRE in admissions. Whether dropping the GRE produces suboptimal admissions depends upon one's relative valuation of diversity versus performance. Third, we review three distinct notions of test fairness—equality, test equity, and performance equity—which have differing implications for dropping the GRE. Finally, we consider test fairness under GRE-optional admissions, noting the missing data problem when GRE is an incomplete variable. Supplemental data for this article is available online.
Citations: 5
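The diversity-performance tradeoff the authors formalize with Pareto-optimal curves can be traced numerically by sweeping the weight placed on a GRE-like predictor in an admissions composite. The subgroup gaps and admission rate below are assumptions of this sketch, not estimates from the article.

```python
# A minimal sketch tracing a diversity-performance tradeoff by sweeping the
# weight on a GRE-like predictor in a two-predictor composite.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
group = rng.integers(0, 2, n)                    # 1 = focal group
gre = rng.normal(0, 1, n) - 0.8 * group          # assumed d = 0.8 gap
gpa = rng.normal(0, 1, n) - 0.3 * group          # smaller assumed gap on GPA
perf = 0.5 * gre + 0.4 * gpa + rng.normal(0, 1, n)

for w in [0.0, 0.25, 0.5, 0.75, 1.0]:            # weight on the GRE
    composite = w * gre + (1 - w) * gpa
    admit = composite >= np.quantile(composite, 0.80)   # top 20% admitted
    ai_ratio = admit[group == 1].mean() / admit[group == 0].mean()
    print(f"w={w:.2f}  mean perf of admits={perf[admit].mean():.3f}  "
          f"AI ratio={ai_ratio:.2f}")
```

Each weight yields one (performance, adverse-impact) point; plotting the sweep traces the tradeoff curve, and the Pareto frontier is the set of weights no other weight dominates on both criteria.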