{"title":"Investigating Approaches to Controlling Item Position Effects in Computerized Adaptive Tests","authors":"Ye Ma, Deborah J. Harris","doi":"10.1111/emip.12637","DOIUrl":"https://doi.org/10.1111/emip.12637","url":null,"abstract":"<p>Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. The majority of previous research studies have focused on investigating IPE under linear testing. There is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item Response Theory (IRT)’s item parameter invariance assumption, which facilitates applications of IRT in various psychometric tasks such as computerized adaptive testing (CAT). Ignoring IPE might lead to issues such as inaccurate ability estimation in CAT. This article extends research on IPE by proposing and evaluating approaches to controlling position effects under an item-level computerized adaptive test via a simulation study. The results show that adjusting IPE via a pretesting design (approach 3) or a pool design (approach 4) results in better ability estimation accuracy compared to no adjustment (baseline approach) and item-level adjustment (approach 2). Practical implications of each approach as well as future research directions are discussed as well.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"44 1","pages":"44-54"},"PeriodicalIF":2.7,"publicationDate":"2024-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143424315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Module 36: Applying Intersectionality Theory to Educational Measurement","authors":"Michael Russell","doi":"10.1111/emip.12622","DOIUrl":"https://doi.org/10.1111/emip.12622","url":null,"abstract":"<div>\u0000 \u0000 <section>\u0000 \u0000 <h3> Module Abstract</h3>\u0000 \u0000 <p>Over the past decade, interest in applying Intersectionality Theory to quantitative analyses has grown. This module examines key concepts that form the foundation of Intersectionality Theory and considers challenges and opportunities these concepts present for quantitative methods. Two examples are presented to demonstrate how an intersectional approach to quantitative analyses differs from a traditional single-axis approach. The first example employs a linear regression technique to examine the efficacy of an educational intervention and to explore whether efficacy differs among subgroups of students. The second example compares findings when a differential item function analysis is conducted in a single-axis manner versus an intersectional lens. The module ends by exploring key considerations analysts and psychometricians encounter when applying Intersectionality Theory to a quantitative analysis.</p>\u0000 </section>\u0000 </div>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"106-108"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12622","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142404751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Cover: Gendered Trajectories of Digital Literacy Development: Insights from a Longitudinal Cohort Study","authors":"Yuan-Ling Liaw","doi":"10.1111/emip.12625","DOIUrl":"https://doi.org/10.1111/emip.12625","url":null,"abstract":"","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"6"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12625","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142404750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demystifying Adequate Growth Percentiles","authors":"Katherine E. Castellano, Daniel F. McCaffrey, Joseph A. Martineau","doi":"10.1111/emip.12635","DOIUrl":"https://doi.org/10.1111/emip.12635","url":null,"abstract":"<p>Growth-to-standard models evaluate student growth against the growth needed to reach a future standard or target of interest, such as proficiency. A common growth-to-standard model involves comparing the popular Student Growth Percentile (SGP) to Adequate Growth Percentiles (AGPs). AGPs follow from an involved process based on fitting a series of nonlinear quantile regression models to longitudinal student test score data. This paper demystifies AGPs by deriving them in the more familiar linear regression framework. It further shows that unlike SGPs, AGPs and on-track classifications based on AGPs are strongly related to status. Lastly, AGPs are evaluated in terms of their classification accuracy. An empirical study and analytic derivations reveal AGPs can be problematic indicators of students’ future performance with previously not proficient students being more likely incorrectly flagged as not on-track and previously proficient students as on track. These classification errors have equity implications at the individual and school levels.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"44 1","pages":"31-43"},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143423664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining the Psychometric Impact of Targeted and Random Double-Scoring in Mixed-Format Assessments","authors":"Yangmeng Xu, Stefanie A. Wind","doi":"10.1111/emip.12636","DOIUrl":"https://doi.org/10.1111/emip.12636","url":null,"abstract":"<p>Double-scoring constructed-response items is a common but costly practice in mixed-format assessments. This study explored the impacts of Targeted Double-Scoring (TDS) and random double-scoring procedures on the quality of psychometric outcomes, including student achievement estimates, person fit, and student classifications under various conditions that reflect operational performance assessments. Using a simulation study, our results suggest no notable advantages for TDS over the random double-scoring approach across various psychometric outcomes, regardless of conditions related to student misfit, rater misfit, and rater severity. This study holds significant implications for mixed-format assessments, offering insights into a comprehensive evaluation of double-scoring methods. We recommend that researchers consider these findings when considering among double-scoring procedures.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"44 1","pages":"18-30"},"PeriodicalIF":2.7,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143424018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Commentary: What Is the Breadth of “Educational Measurement?”","authors":"Marjorie Wine, Alexander M. Hoffman","doi":"10.1111/emip.12627","DOIUrl":"https://doi.org/10.1111/emip.12627","url":null,"abstract":"<p>The work of educational measurement is a highly collaborative endeavor that brings together professionals from many disciplines. While the introduction of the “Foundational Competencies in Educational Measurement” acknowledges this, the explanation of the framework itself falls short in acknowledging the competencies and skills of those from disciplines other than psychometrics, such as content development professionals (CDPs). Therefore, it is unable to sufficiently address the nature of validation work or other work not led by psychometricians. It also underexplores the vital competencies that underlie effective collaboration. As a result, it defines the competencies of psychometric work instead of the larger field of educational measurement.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"23-26"},"PeriodicalIF":2.7,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142404726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Commentary: What Is Truly Foundational?","authors":"Eduardo J. Crespo Cruz, Aria Immanuel, Lisa A. Keller, Ketan, Kimberly McIntee, Fernando José Mena Serrano, Stephen G. Sireci, Nate Smith, Javier Suárez-Álvarez, Craig S. Wells, Rebecca Woodland, April L. Zenisky","doi":"10.1111/emip.12633","DOIUrl":"https://doi.org/10.1111/emip.12633","url":null,"abstract":"<p>The Task Force on Foundational Competencies in Educational Measurement has produced a set of foundational competencies and invited comment on the document. The students and faculty at the University of Massachusetts Amherst provide their comments and critique of the proposed competencies. Both students and faculty agree that there needs to be more specificity regarding the purpose of the document, the nature of the data used to produce the document, and the definition of the relevant terms. Additionally, attention should be paid to the international context, and the role of artificial intelligence and machine learning. The authors acknowledge the contribution of the draft of the foundational competencies and look forward to more conversation regarding this topic.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"52-55"},"PeriodicalIF":2.7,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142404651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Past, Present, and Future of Large-Scale Assessment Consortia","authors":"Derek C. Briggs","doi":"10.1111/emip.12634","DOIUrl":"10.1111/emip.12634","url":null,"abstract":"<p>This article provides a history of the two large-scale assessment consortia that were funded in 2010 as part of the Race to the Top Competition, the Partnership for the Assessment of Readiness for College and Career (PARCC), and the Smarter-Balanced Assessment Consortium (SBAC). I compare the goals the consortia were funded to meet between 2011 and 2015 with the assessment systems PARCC and SBAC were able to deliver. Two notable accomplishments of both consortia were the development of high-quality summative assessments that were carefully designed to be accessible to all test-takers. I describe factors that led many states to withdraw from consortia membership between 2011 and 2015, and which led to the effective dissolution of PARCC. I discuss lessons learned from serving on the PARCC and SBAC technical advisory committees, and make a case for the value proposition of assessment consortia in the future.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 4","pages":"62-72"},"PeriodicalIF":2.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Commentary: Where Does Classroom Assessment Fit in Educational Measurement?","authors":"Susan M. Brookhart","doi":"10.1111/emip.12626","DOIUrl":"10.1111/emip.12626","url":null,"abstract":"<p>This article is a commentary on the Foundational Competencies in Educational Measurement (Ackerman et al., 2024, hereafter the FCs) through the lens of classroom assessment. I focus on the place of classroom assessment in the FCs and in NCME and educational measurement more broadly.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":"43 3","pages":"18-22"},"PeriodicalIF":2.7,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142211654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}