{"title":"Linking Unlinkable Tests: A Step Forward","authors":"Silvia Testa, Renato Miceli","doi":"10.1111/emip.12638","DOIUrl":"https://doi.org/10.1111/emip.12638","url":null,"abstract":"<p>Random Equating (RE) and Heuristic Approach (HA) are two linking procedures that may be used to compare the scores of individuals on two tests that measure the same latent trait when there are no common items or individuals. In this study, RE, which may be used only when the individuals taking the two tests come from the same population, served as a benchmark for evaluating HA, which, in contrast, does not require any distributional assumptions. The comparison was based on both simulated and empirical data. Simulations showed that HA reproduced well the link shift connecting the difficulty parameters of the two sets of items, performing similarly to RE when the distributional assumption was only slightly violated. Empirical results showed satisfactory correspondence between the estimates of item and person parameters obtained via the two procedures.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"44 1","pages":"66-72"},"PeriodicalIF":2.7,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143424022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Mandated to Test-Optional College Admissions Testing: Where Do We Go from Here?","authors":"Kyndra V. Middleton, Comfort H. Omonkhodion, Ernest Y. Amoateng, Lucy O. Okam, Daniela Cardoza, Alexis Oakley","doi":"10.1111/emip.12649","DOIUrl":"https://doi.org/10.1111/emip.12649","url":null,"abstract":"","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 4","pages":"33-37"},"PeriodicalIF":2.7,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143252345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating Approaches to Controlling Item Position Effects in Computerized Adaptive Tests","authors":"Ye Ma, Deborah J. Harris","doi":"10.1111/emip.12637","DOIUrl":"https://doi.org/10.1111/emip.12637","url":null,"abstract":"<p>Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. Most previous research has investigated IPE under linear testing; IPE under adaptive testing remains understudied. In addition, the existence of IPE might violate Item Response Theory (IRT)’s item parameter invariance assumption, which underpins applications of IRT in various psychometric tasks such as computerized adaptive testing (CAT). Ignoring IPE might lead to issues such as inaccurate ability estimation in CAT. This article extends research on IPE by proposing and evaluating approaches to controlling position effects in an item-level computerized adaptive test via a simulation study. The results show that adjusting for IPE via a pretesting design (approach 3) or a pool design (approach 4) yields better ability estimation accuracy than no adjustment (baseline approach) or item-level adjustment (approach 2). Practical implications of each approach and directions for future research are also discussed.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"44 1","pages":"44-54"},"PeriodicalIF":2.7,"publicationDate":"2024-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143424315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Module 36: Applying Intersectionality Theory to Educational Measurement","authors":"Michael Russell","doi":"10.1111/emip.12622","DOIUrl":"https://doi.org/10.1111/emip.12622","url":null,"abstract":"<div><section><h3>Module Abstract</h3><p>Over the past decade, interest in applying Intersectionality Theory to quantitative analyses has grown. This module examines key concepts that form the foundation of Intersectionality Theory and considers challenges and opportunities these concepts present for quantitative methods. Two examples are presented to demonstrate how an intersectional approach to quantitative analyses differs from a traditional single-axis approach. The first example employs a linear regression technique to examine the efficacy of an educational intervention and to explore whether efficacy differs among subgroups of students. The second example compares findings when a differential item functioning analysis is conducted in a single-axis manner versus through an intersectional lens. The module ends by exploring key considerations analysts and psychometricians encounter when applying Intersectionality Theory to a quantitative analysis.</p></section></div>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"106-108"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12622","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142404751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demystifying Adequate Growth Percentiles","authors":"Katherine E. Castellano, Daniel F. McCaffrey, Joseph A. Martineau","doi":"10.1111/emip.12635","DOIUrl":"https://doi.org/10.1111/emip.12635","url":null,"abstract":"<p>Growth-to-standard models evaluate student growth against the growth needed to reach a future standard or target of interest, such as proficiency. A common growth-to-standard model involves comparing the popular Student Growth Percentile (SGP) to Adequate Growth Percentiles (AGPs). AGPs follow from an involved process based on fitting a series of nonlinear quantile regression models to longitudinal student test score data. This paper demystifies AGPs by deriving them in the more familiar linear regression framework. It further shows that, unlike SGPs, AGPs and on-track classifications based on AGPs are strongly related to status. Lastly, AGPs are evaluated in terms of their classification accuracy. An empirical study and analytic derivations reveal that AGPs can be problematic indicators of students’ future performance: previously not-proficient students are more likely to be incorrectly flagged as not on track, and previously proficient students as on track. These classification errors have equity implications at the individual and school levels.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"44 1","pages":"31-43"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Cover: Gendered Trajectories of Digital Literacy Development: Insights from a Longitudinal Cohort Study","authors":"Yuan-Ling Liaw","doi":"10.1111/emip.12625","DOIUrl":"https://doi.org/10.1111/emip.12625","url":null,"abstract":"","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"6"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12625","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142404750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining the Psychometric Impact of Targeted and Random Double-Scoring in Mixed-Format Assessments","authors":"Yangmeng Xu, Stefanie A. Wind","doi":"10.1111/emip.12636","DOIUrl":"https://doi.org/10.1111/emip.12636","url":null,"abstract":"<p>Double-scoring constructed-response items is a common but costly practice in mixed-format assessments. This study explored the impacts of Targeted Double-Scoring (TDS) and random double-scoring procedures on the quality of psychometric outcomes, including student achievement estimates, person fit, and student classifications, under various conditions that reflect operational performance assessments. Using a simulation study, our results suggest no notable advantages for TDS over the random double-scoring approach across various psychometric outcomes, regardless of conditions related to student misfit, rater misfit, and rater severity. This study holds significant implications for mixed-format assessments, offering a comprehensive evaluation of double-scoring methods. We recommend that researchers consider these findings when choosing among double-scoring procedures.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"44 1","pages":"18-30"},"PeriodicalIF":2.7,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143424018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Commentary: What Is the Breadth of “Educational Measurement?”","authors":"Marjorie Wine, Alexander M. Hoffman","doi":"10.1111/emip.12627","DOIUrl":"https://doi.org/10.1111/emip.12627","url":null,"abstract":"<p>The work of educational measurement is a highly collaborative endeavor that brings together professionals from many disciplines. While the introduction of the “Foundational Competencies in Educational Measurement” acknowledges this, the explanation of the framework itself falls short in acknowledging the competencies and skills of those from disciplines other than psychometrics, such as content development professionals (CDPs). Therefore, it is unable to sufficiently address the nature of validation work or other work not led by psychometricians. It also underexplores the vital competencies that underlie effective collaboration. As a result, it defines the competencies of psychometric work instead of the larger field of educational measurement.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"23-26"},"PeriodicalIF":2.7,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142404726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Commentary: What Is Truly Foundational?","authors":"Eduardo J. Crespo Cruz, Aria Immanuel, Lisa A. Keller, Ketan, Kimberly McIntee, Fernando José Mena Serrano, Stephen G. Sireci, Nate Smith, Javier Suárez-Álvarez, Craig S. Wells, Rebecca Woodland, April L. Zenisky","doi":"10.1111/emip.12633","DOIUrl":"https://doi.org/10.1111/emip.12633","url":null,"abstract":"<p>The Task Force on Foundational Competencies in Educational Measurement has produced a set of foundational competencies and invited comment on the document. The students and faculty at the University of Massachusetts Amherst provide their comments and critique of the proposed competencies. Both students and faculty agree that there needs to be more specificity regarding the purpose of the document, the nature of the data used to produce the document, and the definition of the relevant terms. Additionally, attention should be paid to the international context, and the role of artificial intelligence and machine learning. The authors acknowledge the contribution of the draft of the foundational competencies and look forward to more conversation regarding this topic.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"52-55"},"PeriodicalIF":2.7,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142404651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}