Authors: Qi (Helen) Huang, Daniel M. Bolt, Xiangyi Liao
DOI: 10.1111/jedm.12433
Journal: Journal of Educational Measurement, Vol. 62, No. 2, pp. 199-224
Published: 2025-04-11 (Journal Article)
Impact Factor: 1.6 · JCR: Q3 (Psychology, Applied)
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/jedm.12433
Theory-Driven IRT Modeling of Vocabulary Development: Matthew Effects and the Case for Unipolar IRT
Item response theory (IRT) encompasses a broader class of measurement models than is commonly appreciated by practitioners in educational measurement. For measures of vocabulary and its development, we show how psychological theory might in certain instances support unipolar IRT modeling as a superior alternative to the more traditional bipolar IRT models fitted in practice. Although corresponding model choices make unipolar IRT statistically equivalent to bipolar IRT, adopting the unipolar approach substantially alters the resulting metric for proficiency. This shift can have substantial implications for educational research and practices that depend heavily on interval-level score interpretations. As an example, we illustrate through simulation how the perspective of unipolar IRT may account for inconsistencies seen across empirical studies in the observation (or lack thereof) of Matthew effects in reading/vocabulary development (i.e., growth being positively correlated with baseline proficiency), despite theoretical expectations for their presence. Additionally, a unipolar measurement perspective can reflect the anticipated diversification of vocabulary as proficiency level increases. Implications of unipolar IRT representations for constructing tests of vocabulary proficiency and evaluating measurement error are discussed.
Journal introduction:
The Journal of Educational Measurement (JEM) publishes original measurement research, provides reviews of measurement publications, and reports on innovative measurement applications. The topics addressed will interest those concerned with the practice of measurement in field settings as well as measurement theorists. In addition to presenting new contributions to measurement theory and practice, JEM also serves as a vehicle for improving educational measurement applications in a variety of settings.