{"title":"Test Security and the Pandemic: Comparison of Test Center and Online Proctor Delivery Modalities","authors":"Kirk A. Becker, Jinghua Liu, Paul E. Jones","doi":"10.1177/01466216241248826","DOIUrl":null,"url":null,"abstract":"Published information is limited regarding the security of testing programs, and even less on the relative security of different testing modalities: in-person at test centers (TC) versus remote online proctored (OP) testing. This article begins by examining indicators of test security violations across a wide range of programs in professional, admissions, and IT fields. We look at high levels of response overlap as a potential indicator of collusion to cheat on the exam and compare rates by modality and between test center types. Next, we scrutinize indicators of potential test security violations for a single large testing program over the course of 14 months, during which the program went from exclusively in-person TC testing to a mix of OP and TC testing. Test security indicators include high response overlap, large numbers of fast correct responses, large numbers of slow correct responses, large test-retest score gains, unusually fast response times for passing candidates, and measures of differential person functioning. These indicators are examined and compared prior to and after the introduction of OP testing. In addition, test-retest modality is examined for candidates who fail and retest subsequent to the introduction of OP testing, with special attention paid to test takers who change modality between the initial attempt and the retest. These data allow us to understand whether indications of content exposure increase with the introduction of OP testing, and whether testing modalities affect potential score increase in a similar way.","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":null,"pages":null},"PeriodicalIF":1.0000,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Psychological Measurement","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/01466216241248826","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"PSYCHOLOGY, MATHEMATICAL","Score":null,"Total":0}
Citations: 0
Abstract
Published information is limited regarding the security of testing programs, and even less on the relative security of different testing modalities: in-person at test centers (TC) versus remote online proctored (OP) testing. This article begins by examining indicators of test security violations across a wide range of programs in professional, admissions, and IT fields. We look at high levels of response overlap as a potential indicator of collusion to cheat on the exam and compare rates by modality and between test center types. Next, we scrutinize indicators of potential test security violations for a single large testing program over the course of 14 months, during which the program went from exclusively in-person TC testing to a mix of OP and TC testing. Test security indicators include high response overlap, large numbers of fast correct responses, large numbers of slow correct responses, large test-retest score gains, unusually fast response times for passing candidates, and measures of differential person functioning. These indicators are examined and compared prior to and after the introduction of OP testing. In addition, test-retest modality is examined for candidates who fail and retest subsequent to the introduction of OP testing, with special attention paid to test takers who change modality between the initial attempt and the retest. These data allow us to understand whether indications of content exposure increase with the introduction of OP testing, and whether testing modalities affect potential score increase in a similar way.
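The abstract treats a high rate of identical responses between two candidates as a potential indicator of collusion. As a rough illustration of how such an overlap screen can be computed, the sketch below calculates the raw proportion of matching responses for every pair of candidates on the same form and flags pairs above a cutoff. The function names, data layout, and 0.95 threshold are assumptions for illustration only; the article's operational statistic is not specified here and would typically condition on ability and item characteristics, which this simple proportion does not.

```python
# Minimal sketch of a raw response-overlap screen between candidate pairs.
# Illustrative only: names and threshold are hypothetical, and this is NOT
# the specific collusion index used in the article.
from itertools import combinations


def response_overlap(resp_a, resp_b):
    """Proportion of items on which two candidates gave identical responses."""
    assert len(resp_a) == len(resp_b), "candidates must have taken the same form"
    matches = sum(a == b for a, b in zip(resp_a, resp_b))
    return matches / len(resp_a)


def flag_high_overlap(responses_by_candidate, threshold=0.95):
    """Return candidate pairs whose raw overlap meets or exceeds the threshold.

    responses_by_candidate: dict mapping candidate ID -> list of item responses
    threshold: illustrative cutoff; operational programs would set this empirically.
    """
    flagged = []
    for (id_a, resp_a), (id_b, resp_b) in combinations(responses_by_candidate.items(), 2):
        overlap = response_overlap(resp_a, resp_b)
        if overlap >= threshold:
            flagged.append((id_a, id_b, overlap))
    return flagged


# Toy usage with responses coded as option letters:
candidates = {
    "C001": ["A", "B", "C", "D", "A", "B"],
    "C002": ["A", "B", "C", "D", "A", "B"],  # identical to C001 -> flagged
    "C003": ["B", "B", "A", "D", "C", "A"],
}
print(flag_high_overlap(candidates, threshold=0.95))
```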
Journal Introduction
Applied Psychological Measurement publishes empirical research on the application of techniques of psychological measurement to substantive problems in all areas of psychology and related disciplines.