{"title":"Digital Module 34: Introduction to Multilevel Measurement Modeling","authors":"Mairead Shaw, Jessica K. Flake","doi":"10.1111/emip.12585","DOIUrl":"https://doi.org/10.1111/emip.12585","url":null,"abstract":"<div>\u0000 \u0000 <section>\u0000 \u0000 <h3> Module Abstract</h3>\u0000 \u0000 <p>Clustered data structures are common in many areas of educational and psychological research (e.g., students clustered in schools, patients clustered by clinician). In the course of conducting research, questions are often administered to obtain scores reflecting latent constructs. Multilevel measurement models (MLMMs) allow for modeling measurement (the relationship of test items to constructs) and the relationships between variables in a clustered data structure. Modeling the two concurrently is important for accurately representing the relationships between items and constructs, and between constructs and other constructs/variables. The barrier to entry with MLMMs can be high, with many equations and less-documented software functionality. This module reviews two different frameworks for multilevel measurement modeling: (1) multilevel modeling and (2) structural equation modeling. We demonstrate the entire process in R with working code and available data, from preparing the dataset, through writing and running code, to interpreting and comparing output for the two approaches.</p>\u0000 </section>\u0000 </div>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12585","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138485188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing Large-Scale Assessments in Two Proctoring Modalities with Interactive Log Data Analysis","authors":"Jinnie Shin, Qi Guo, Maxim Morin","doi":"10.1111/emip.12582","DOIUrl":"10.1111/emip.12582","url":null,"abstract":"<p>With the increased restrictions on physical distancing due to the COVID-19 pandemic, remote proctoring has emerged as an alternative to traditional onsite proctoring to ensure the continuity of essential assessments, such as computer-based medical licensing exams. Recent literature has highlighted the significant impact of different proctoring modalities on examinees’ test experience, including factors like response-time data. However, the potential influence of these differences on test performance has remained unclear. One limitation in the current literature is the lack of a rigorous learning analytics framework to evaluate the comparability of computer-based exams delivered using various proctoring settings. To address this gap, the current study aims to introduce a machine-learning-based framework that analyzes computer-generated response-time data to investigate the association between proctoring modalities in high-stakes assessments. We demonstrated the effectiveness of this framework using empirical data collected from a large-scale high-stakes medical licensing exam conducted in Canada. By applying the machine-learning-based framework, we were able to extract examinee-specific response-time data for each proctoring modality and identify distinct time-use patterns among examinees based on their proctoring modality.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135934362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foundational Competencies in Educational Measurement","authors":"Terry A. Ackerman, Deborah L. Bandalos, Derek C. Briggs, Howard T. Everson, Andrew D. Ho, Susan M. Lottridge, Matthew J. Madison, Sandip Sinharay, Michael C. Rodriguez, Michael Russell, Alina A. von Davier, Stefanie A. Wind","doi":"10.1111/emip.12581","DOIUrl":"10.1111/emip.12581","url":null,"abstract":"<p>This article presents the consensus of an National Council on Measurement in Education Presidential Task Force on Foundational Competencies in Educational Measurement. Foundational competencies are those that support future development of additional professional and disciplinary competencies. The authors develop a framework for foundational competencies in educational measurement, illustrate how educational measurement programs can help learners develop these competencies, and demonstrate how foundational competencies continue to develop in educational measurement professions. The framework introduces three foundational competency domains: Communication and Collaboration Competencies; Technical, Statistical, and Computational Competencies; and Educational Measurement Competencies. Within the Educational Measurement Competency domain, the authors identify five subdomains: Social, Cultural, Historical, and Political Context; Validity, Validation, and Fairness; Theory and Instrumentation; Precision and Generalization; and Psychometric Modeling.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136034537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Module 33: Fairness in Classroom Assessment: Dimensions and Tensions","authors":"Amirhossein Rasooli","doi":"10.1111/emip.12572","DOIUrl":"10.1111/emip.12572","url":null,"abstract":"<p>Perceptions of fairness are fundamental in building cooperation and trust, undermining conflicts, and gaining legitimacy in teacher-student relationships in classroom assessment. However, perceptions of unfairness in assessment can undermine students’ mental well-being, increase antisocial behaviors, increase psychological disengagement with learning, and threaten the belief in a fair society, fundamental to engaging in civic responsibilities. Despite the crucial role of perceived fairness in assessment, there are widespread experiences of unfairness reported by students internationally. To undermine these widespread unfair experiences, limited explicit education on promoting fairness in assessment is being delivered in graduate, preservice, and in-service training. However, it seems that explicit education is the first step in capacity building for reducing unfair perceptions and related undesirable outcomes. The purpose of this module is thus to share the findings drawn from theoretical and empirical research from various countries to provide a space for further critical reflection on best practices in enhancing fairness in classroom assessment contexts.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12572","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43265276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reached or Not Reached: A Tale of Two Data Sources","authors":"Yuan-Ling Liaw","doi":"10.1111/emip.12574","DOIUrl":"10.1111/emip.12574","url":null,"abstract":"","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47254373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ITEMS Corner Update: Recording Audio and Adding an Editorial Polish to an ITEMS Module","authors":"Brian C. Leventhal","doi":"10.1111/emip.12573","DOIUrl":"10.1111/emip.12573","url":null,"abstract":"<p>In the first issue of <i>Educational Measurement: Issues and Practice</i> (EM:IP) in 2023, I outlined the 10 steps to the <i>Instructional Topics in Educational Measurement Series (ITEMS)</i> module development process. I then detailed the first three steps in the second issue, and in this issue, I discuss Steps 4–7, focusing on the audio recording process, editorial polish, interactive activities, and learning check development. I devote space discussing each in detail to provide readers and potential authors with a better understanding of the behind-the-scenes efforts throughout the ITEMS module development process. Following this discussion, I reiterate a call for module topics and conclude by introducing the latest entry to the ITEMS module library.</p><p>Throughout content development (Step 3), authors are encouraged to draft notes or a script for each slide to assist in audio recording. After drafted content is approved by the editorial team, the author begins Step 4: audio recording. There are no special skills or software needed to record the audio, and hardware (i.e., a microphone) is provided when necessary. Audio recording is done within PowerPoint and on each slide independently. In this sense, a 20-minute module section's audio is recorded in 1–3 minutes bits so that should re-recording be required, the author does not need to fully re-record an entire section. This also facilitates smoother transitions throughout each section, leading to a more natural speaking style. Although authors are encouraged to use a script (this is helpful should re-recording be necessary), it is emphasized that the audio should not sound like reading. Rather audio should be in a similar style to that of an instructor providing a professional workshop.</p><p>Once the audio recording is complete, the work shifts to the editorial team. During Step 5, the editorial team polishes the module content and audio. On each slide, they clean up the audio by reducing background noise, editing sections of silence, and increasing or decreasing the volume. After audio editing is complete, the editorial team adds slide transitions, object animations, and other stylistic tools to assist learning. For example, transition animations and timing assist smooth continuation of thought and content from slide to slide. Animations are synced with the audio to have bullet points appear when discussed, figures fade in when mentioned, and other content displayed systematically to not overwhelm the learner. Additional stylistic tools and techniques are employed to take advantage of the digital platform. For example, graph elements (e.g., axis labels) are animated in stages, fading into view as they are described throughout the audio to help focus the learner. Shapes, such as circles or arrows, may also be added to figures to highlight specific elements when emphasized in the audio. To assist with flow and organization, the editorial team may use additional slides or flow charts. 
For ","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12573","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43923249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Item Selection Algorithm Based on Collaborative Filtering for Item Exposure Control","authors":"Yiqin Pan, Oren Livne, James A. Wollack, Sandip Sinharay","doi":"10.1111/emip.12578","DOIUrl":"10.1111/emip.12578","url":null,"abstract":"<p>In computerized adaptive testing, overexposure of items in the bank is a serious problem and might result in item compromise. We develop an item selection algorithm that utilizes the entire bank well and reduces the overexposure of items. The algorithm is based on collaborative filtering and selects an item in two stages. In the first stage, a set of candidate items whose expected performance matches the examinee's current performance is selected. In the second stage, an item that is approximately matched to the examinee's observed performance is selected from the candidate set. The expected performance of an examinee on an item is predicted by autoencoders. Experiment results show that the proposed algorithm outperforms existing item selection algorithms in terms of item exposure while incurring only a small loss in measurement precision.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42948381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurement Efficiency for Technology-Enhanced and Multiple-Choice Items in a K–12 Mathematics Accountability Assessment","authors":"Ozge Ersan, Yufeng Berry","doi":"10.1111/emip.12580","DOIUrl":"10.1111/emip.12580","url":null,"abstract":"<p>The increasing use of computerization in the testing industry and the need for items potentially measuring higher-order skills have led educational measurement communities to develop technology-enhanced (TE) items and conduct validity studies on the use of TE items. Parallel to this goal, the purpose of this study was to collect validity evidence comparing item information functions, expected information values, and measurement efficiencies (item information per time unit) between multiple-choice (MC) and technology-enhanced (TE) items. The data came from K–12 mathematics large-scale accountability assessments. The study results were mainly interpreted descriptively, and the presence of specific patterns between MC and TE items was examined across grades and depth of knowledge levels. Although many earlier researchers pointed out that TE items were not as efficient as MC items, the results from the study point to ways that TE items might provide more information and were more than or equally efficient as MC items overall.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41782558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}