{"title":"Comparing Large-Scale Assessments in Two Proctoring Modalities with Interactive Log Data Analysis","authors":"Jinnie Shin, Qi Guo, Maxim Morin","doi":"10.1111/emip.12582","DOIUrl":null,"url":null,"abstract":"<p>With the increased restrictions on physical distancing due to the COVID-19 pandemic, remote proctoring has emerged as an alternative to traditional onsite proctoring to ensure the continuity of essential assessments, such as computer-based medical licensing exams. Recent literature has highlighted the significant impact of different proctoring modalities on examinees’ test experience, including factors like response-time data. However, the potential influence of these differences on test performance has remained unclear. One limitation in the current literature is the lack of a rigorous learning analytics framework to evaluate the comparability of computer-based exams delivered using various proctoring settings. To address this gap, the current study aims to introduce a machine-learning-based framework that analyzes computer-generated response-time data to investigate the association between proctoring modalities in high-stakes assessments. We demonstrated the effectiveness of this framework using empirical data collected from a large-scale high-stakes medical licensing exam conducted in Canada. By applying the machine-learning-based framework, we were able to extract examinee-specific response-time data for each proctoring modality and identify distinct time-use patterns among examinees based on their proctoring modality.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Educational Measurement-Issues and Practice","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/emip.12582","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Abstract
With the increased restrictions on physical distancing due to the COVID-19 pandemic, remote proctoring has emerged as an alternative to traditional onsite proctoring to ensure the continuity of essential assessments, such as computer-based medical licensing exams. Recent literature has highlighted the significant impact of different proctoring modalities on examinees' test experience, including differences reflected in response-time data. However, the potential influence of these differences on test performance has remained unclear. One limitation of the current literature is the lack of a rigorous learning analytics framework for evaluating the comparability of computer-based exams delivered under different proctoring settings. To address this gap, the current study introduces a machine-learning-based framework that analyzes computer-generated response-time data to investigate the association between proctoring modality and examinees' time-use behavior in high-stakes assessments. We demonstrated the effectiveness of this framework using empirical data collected from a large-scale, high-stakes medical licensing exam conducted in Canada. By applying the machine-learning-based framework, we extracted examinee-specific response-time data for each proctoring modality and identified distinct time-use patterns among examinees based on their proctoring modality.
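To illustrate the general shape of such an analysis, the sketch below shows one way to go from interaction-log records to candidate time-use patterns: pivoting logs into an examinee-by-item response-time matrix, clustering examinees, and cross-tabulating clusters against proctoring modality. This is a minimal sketch only; the column names (examinee_id, item_id, modality, response_time), the toy data, and the choice of k-means are assumptions for demonstration and are not the authors' actual pipeline.

```python
# Illustrative sketch only: column names, toy data, and the use of k-means
# are assumptions, not the framework described in the paper.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical log extract: one row per examinee-item interaction.
logs = pd.DataFrame({
    "examinee_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "item_id":     ["A", "B", "A", "B", "A", "B", "A", "B"],
    "modality":    ["remote", "remote", "onsite", "onsite",
                    "remote", "remote", "onsite", "onsite"],
    "response_time": [45.2, 80.1, 30.5, 60.0, 50.3, 95.7, 28.4, 55.9],
})

# Pivot to an examinee-by-item response-time matrix.
rt_matrix = logs.pivot(index="examinee_id", columns="item_id",
                       values="response_time")

# Standardize so items with long average times do not dominate the distance metric.
scaled = StandardScaler().fit_transform(rt_matrix)

# Cluster examinees into candidate time-use patterns (k chosen arbitrarily here).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

# Cross-tabulate cluster membership against proctoring modality to inspect the association.
modality = logs.groupby("examinee_id")["modality"].first()
print(pd.crosstab(clusters, modality))
```

In practice, the number of clusters and the features derived from the logs would be chosen with model-selection criteria rather than fixed in advance, as in this toy example.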