Applied Psychological Measurement: Latest Articles

On the Use of Elbow Plot Method for Class Enumeration in Factor Mixture Models.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-05-20 | DOI: 10.1177/01466216251344288
Sedat Sen, Allan S Cohen
Abstract: Application of factor mixture models (FMMs) requires determining the correct number of latent classes. A number of studies have examined the performance of several information criterion (IC) indices, but none has yet examined the effectiveness of the elbow plot method. In this study, therefore, the effectiveness of the elbow plot method was compared with the lowest-value criterion and the difference method, calculated from five commonly used IC indices. A simulation study showed that the elbow plot method detected the generating model at least 90% of the time for two- and three-class FMMs, but it did not perform well under two-factor, four-class conditions. Its performance was generally better than that of the lowest-IC-value criterion and the difference method under two- and three-class conditions. Under four-class conditions, there were no meaningful differences between the elbow plot method and the lowest-value criterion, while the difference method outperformed the other two methods under two-factor, four-class conditions.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092417/pdf/
Citations: 0
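The elbow heuristic the abstract evaluates can be illustrated with a toy sketch (not the authors' implementation): tabulate an information criterion such as BIC against the number of classes and pick the class count where the curve bends most sharply, here located via the largest second difference. The BIC values below are hypothetical.

```python
import numpy as np

def find_elbow(ic_values):
    """Pick the elbow in a sequence of information-criterion values,
    one per candidate class count starting at 1 class, as the point
    with the largest second difference (sharpest flattening)."""
    ic = np.asarray(ic_values, dtype=float)
    second_diff = ic[:-2] - 2 * ic[1:-1] + ic[2:]
    # second differences exist for classes 2..K-1; index 0 maps to 2 classes
    return int(np.argmax(second_diff)) + 2

# Hypothetical BIC values for 1-5 classes: steep improvement up to 3 classes
bic = [5200, 4900, 4450, 4430, 4425]
print(find_elbow(bic))  # -> 3
```

In practice the elbow is usually read off a plot by eye; the second-difference rule is just one automatable stand-in for that judgment.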
Maximum Marginal Likelihood Estimation of the MUPP-GGUM Model.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-04-19 | DOI: 10.1177/01466216251336925
Jianbin Fu
(No abstract available.)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12009269/pdf/
Citations: 0
Understanding Rater Cognition in Performance Assessment: A Mixed IRTree Approach.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-04-14 | DOI: 10.1177/01466216251333578
Hung-Yu Huang
Abstract: When rater-mediated assessments are conducted, human raters appraise the performance of ratees. However, challenges arise regarding how validly raters' judgments reflect ratees' competencies according to scoring rubrics. Research on rater cognition suggests that both impersonal judgments and personal preferences can influence raters' judgmental processes. This study introduces a mixed IRTree-based model for rater judgments (MIM-R), which identifies professional and novice raters by sequentially applying ideal-point and dominance item response theory (IRT) models to raters' cognitive processes. Simulation results demonstrate satisfactory recovery of MIM-R parameters and highlight the importance of accounting for the mixed nature of raters in the rating process: neglecting it leads to increasingly biased estimates as the proportion of novice raters grows. An empirical example from a creativity assessment illustrates the application and implications of MIM-R.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11996833/pdf/
Citations: 0
Accuracy in Invariance Detection With Multilevel Models With Three Estimators.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-03-24 | DOI: 10.1177/01466216251325644
W Holmes Finch, Cihan Demir, Brian F French, Thao Vo
Abstract: Applied and simulation studies document model-convergence and accuracy issues in differential item functioning (DIF) detection with multilevel models, hindering detection. This study evaluated the effectiveness of various estimation techniques in addressing these issues and ensuring robust DIF detection. We conducted a simulation study of multilevel logistic regression models with level-2 predictors across different estimation procedures: maximum likelihood estimation (MLE), Bayesian estimation, and generalized estimating equations (GEE). All three methods maintained control of the Type I error rate across conditions. In most cases, GEE had comparable or higher power than MLE for identifying DIF, with Bayesian estimation having the lowest power. When potentially important covariates at levels 1 and 2 were included in the model, power was higher for all methods. These results suggest that where multilevel logistic regression is used for DIF detection, GEE offers a viable option, and that including important contextual variables at all levels of the data is desirable. Implications for practice are discussed.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11948245/pdf/
Citations: 0
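For context, the single-level ancestor of the models in this study is the classic logistic-regression DIF screen: condition the item response on a matching score and test whether adding group membership improves fit. The sketch below (plain Newton-Raphson, simulated data, no multilevel structure or GEE) is an illustration of that baseline only, not the authors' procedure.

```python
import numpy as np

def fit_logistic(X, y, iters=30):
    """Logistic regression via Newton-Raphson; returns the maximized
    log-likelihood (all that a likelihood-ratio test needs)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = np.clip(1 / (1 + np.exp(-X @ beta)), 1e-9, 1 - 1e-9)
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    p = np.clip(1 / (1 + np.exp(-X @ beta)), 1e-9, 1 - 1e-9)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def uniform_dif_g2(total, group, item):
    """Likelihood-ratio statistic for uniform DIF: deviance drop from
    adding group membership to a model conditioning on total score;
    approximately chi-square(1) under the no-DIF null."""
    n = len(total)
    X0 = np.column_stack([np.ones(n), total])
    X1 = np.column_stack([np.ones(n), total, group])
    return 2 * (fit_logistic(X1, item) - fit_logistic(X0, item))

# Simulated item with uniform DIF favoring group 1 (illustrative data)
rng = np.random.default_rng(1)
n = 500
total = rng.normal(0, 1, n)                 # stand-in for a matching score
group = rng.integers(0, 2, n).astype(float)
logit = -0.2 + 1.0 * total + 1.0 * group    # group shifts the intercept: DIF
item = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)
g2 = uniform_dif_g2(total, group, item)
print(g2 > 3.84)  # flagged at the 5% level
```

The multilevel versions studied in the article add cluster-level random effects or, for GEE, a working correlation structure over examinees within clusters.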
Calculating Bias in Test Score Equating in a NEAT Design.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-03-24 | DOI: 10.1177/01466216251330305
Marie Wiberg, Inga Laukaityte
Abstract: Test score equating makes scores from different test forms comparable, even when groups differ in ability. In practice, the non-equivalent groups with anchor test (NEAT) design is commonly used. The overall aim was to compare the amount of bias under different conditions when using either chained equating or frequency estimation with five different criterion functions: the identity function, linear equating, equipercentile equating, chained equating, and frequency estimation. We used real test data from a multiple-choice, binary-scored college admissions test to illustrate that the choice of criterion function matters. We also simulated data in line with the empirical data to examine differences in ability between groups, in item difficulty, in the lengths of the anchor and regular test forms, and in the correlation between the anchor and regular test forms, as well as different sample sizes. The results indicate that how bias is defined heavily affects the conclusions drawn about which equating method is preferable in different scenarios. Practical implications for standardized tests are given, together with recommendations on how to calculate bias when evaluating equating transformations.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11948241/pdf/
Citations: 0
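Two of the methods named above, linear equating and its chained NEAT variant, can be sketched in a few lines. This is a minimal illustration with toy score vectors, not the authors' code or data.

```python
import numpy as np

def linear_equate(x, scores_from, scores_to):
    """Linear equating: map score x onto the target scale by matching
    the mean and standard deviation of the two score distributions."""
    mx, sx = np.mean(scores_from), np.std(scores_from)
    my, sy = np.mean(scores_to), np.std(scores_to)
    return my + (sy / sx) * (x - mx)

def chained_linear(x, x_scores, x_anchor, y_anchor, y_scores):
    """Chained linear equating in a NEAT design: equate form X to the
    anchor using group 1's data, then the anchor to form Y using group 2's."""
    return linear_equate(linear_equate(x, x_scores, x_anchor),
                         y_anchor, y_scores)

# Tiny illustrative score vectors (group 1 took X + anchor, group 2 Y + anchor)
x_scores, x_anchor = [10, 20, 30], [5, 10, 15]
y_anchor, y_scores = [6, 12, 18], [20, 40, 60]
print(round(chained_linear(20, x_scores, x_anchor, y_anchor, y_scores), 2))  # -> 33.33
```

The "criterion function" question the abstract raises is which such transformation is treated as the truth when computing bias of another.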
On a Reparameterization of the MC-DINA Model.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-03-11 | DOI: 10.1177/01466216251324938
Lawrence T DeCarlo
Abstract: The MC-DINA model is a cognitive diagnosis model (CDM) for multiple-choice items introduced by de la Torre (2009). It extends the usual CDM in two basic ways: it allows nominal responses instead of only dichotomous responses, and it allows skills to affect not only the choice of the correct response but also the choice of distractors. Here it is shown that the model can be re-expressed as a multinomial logit model with latent discrete predictors, that is, as a multinomial mixture model; a signal-detection-like parameterization is also used. The reparameterization clarifies the structure and assumptions of the model, especially with respect to distractors, and helps reveal parameter restrictions, which in turn have implications for psychological interpretations of the data and for statistical estimation. The approach suggests parsimonious models that are useful for practical applications, particularly with small sample sizes. The restrictions are shown to appear for items from the TIMSS 2007 fourth-grade exam.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11897991/pdf/
Citations: 0
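The multinomial-logit view can be sketched as follows. This is a hypothetical parameterization for illustration, not DeCarlo's exact one: each response option's logit gets a baseline term plus a mastery effect that switches on when the latent indicator says the examinee holds the item's required attributes.

```python
import numpy as np

def option_probs(baseline, mastery_effect, has_attributes):
    """Choice probabilities for one multiple-choice item under a
    multinomial-logit mixture: logits = baseline + effect * delta,
    where delta = 1 for masters and 0 for non-masters."""
    z = np.asarray(baseline, float) + np.asarray(mastery_effect, float) * has_attributes
    ez = np.exp(z - z.max())   # stabilized softmax
    return ez / ez.sum()

# Option 0 is keyed correct; mastery boosts its logit (toy numbers)
p_master = option_probs([0.0, 0.0, 0.0], [2.0, 0.0, 0.0], 1)
p_nonmaster = option_probs([0.0, 0.0, 0.0], [2.0, 0.0, 0.0], 0)
print(p_master[0] > p_nonmaster[0])  # -> True
```

Nonzero mastery effects on distractor logits would capture the model's second extension, skills influencing which distractor non-masters choose.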
Modeling Within- and Between-Person Differences in the Use of the Middle Category in Likert Scales.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-03-02 | DOI: 10.1177/01466216251322285
Jesper Tijmstra, Maria Bolsinova
Abstract: When using Likert scales, the inclusion of a middle-category response option poses a challenge for valid measurement of the psychological attribute of interest. While this middle category is often included to provide respondents with a neutral response option, respondents may in practice also select it when they do not want to, or cannot, give an informative response. If response data are analyzed without considering these two possible uses of the middle category, measurement may be confounded. In this paper, we propose a response-mixture IRTree model for the analysis of Likert-scale data. The model acknowledges that the middle response category can be selected either as a non-response option (and hence be uninformative about the attribute of interest) or to communicate a neutral position (and hence be informative), and that this choice depends on both person and item characteristics. For each observed middle-category response, the probability that it was intended to be informative is modeled, and both the attribute of substantive interest and a non-response tendency are estimated. The performance of the model is evaluated in a simulation study, and the procedure is applied to empirical data from personality psychology.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11873858/pdf/
Citations: 0
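The key quantity, the probability that an observed middle response was informative, follows from Bayes' rule once the two component probabilities are in hand. A minimal sketch with hypothetical numbers (the actual model derives these probabilities from person and item parameters):

```python
def prob_informative(pi_inf, p_mid_informative, p_mid_nonresponse=1.0):
    """Posterior probability that an observed middle-category response
    was informative. pi_inf: prior probability the respondent uses the
    middle category informatively; p_mid_informative: model-implied
    probability of a middle response under the informative process;
    p_mid_nonresponse: the same under the non-response process (1.0 if
    non-responders always choose the middle category)."""
    num = pi_inf * p_mid_informative
    return num / (num + (1 - pi_inf) * p_mid_nonresponse)

# Respondent equally likely to be informative; the informative process
# puts probability 0.5 on the middle category (toy numbers)
print(prob_informative(0.5, 0.5))  # -> 1/3
```

Middle responses are thus weighted by their posterior informativeness rather than treated uniformly, which is what prevents the confounding the abstract describes.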
Weighted Answer Similarity Analysis.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-03-01 | DOI: 10.1177/01466216251322353
Nicholas Trout, Kylie Gorney
Abstract: Romero et al. (2015; see also Wollack, 1997) developed the ω statistic as a method for detecting unusually similar answers between pairs of examinees. For each pair, the ω statistic considers whether the observed number of similar answers is significantly larger than the expected number of similar answers. However, one limitation of ω is that it does not account for the particular items on which similar answers are observed. In this study, we therefore propose a weighted version of the ω statistic that takes this information into account. We compare the performance of the new and existing statistics using detailed simulations in which several factors are manipulated. Results show that while both statistics control the Type I error rate, the new statistic is more powerful on average.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11873304/pdf/
Citations: 0
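The idea can be sketched as standardizing an (item-weighted) count of matching answers against its expectation under independent responding. The match probabilities and weights below are placeholders; the published ω statistic derives them from an IRT nominal response model, which this sketch does not attempt.

```python
import numpy as np

def weighted_similarity_z(resp_a, resp_b, match_probs, weights=None):
    """Standardized weighted answer-similarity index: compares the
    weighted number of matching responses for an examinee pair with the
    number expected under independent responding. match_probs[i] is the
    assumed probability that B picks A's observed choice on item i."""
    resp_a, resp_b = np.asarray(resp_a), np.asarray(resp_b)
    p = np.asarray(match_probs, dtype=float)
    w = np.ones_like(p) if weights is None else np.asarray(weights, float)
    match = (resp_a == resp_b).astype(float)
    observed = np.sum(w * match)
    expected = np.sum(w * p)
    var = np.sum(w**2 * p * (1 - p))
    return (observed - expected) / np.sqrt(var)

# Four items, three matches, each match expected half the time (toy numbers)
z = weighted_similarity_z([1, 2, 3, 1], [1, 2, 3, 2], [0.5] * 4)
print(z)  # -> 1.0
```

Setting the weights to emphasize items where chance agreement is unlikely is the intuition behind the weighted variant's power gain.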
Impact of Parameter Predictability and Joint Modeling of Response Accuracy and Response Time on Ability Estimates.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-02-26 | DOI: 10.1177/01466216251322646
Maryam Pezeshki, Susan Embretson
Abstract: To maintain test quality, a large supply of items is typically desired. Automatic item generation can reduce cost and labor, especially if the generated items have predictable item parameters, possibly reducing or eliminating the need for empirical tryout. However, the effect of different levels of item-parameter predictability on the accuracy of trait estimation with item response theory models is unclear. When predictability is lower, adding response time as a collateral source of information may mitigate the effect on trait-estimation accuracy. The present study investigates the impact of varying item-parameter predictability on trait-estimation accuracy, along with the impact of adding response time as a collateral source of information. Results indicated that trait estimation using item-family model-based item parameters differed only slightly from estimation using known item parameters. Somewhat larger trait-estimation errors resulted from using cognitive-complexity features to predict item parameters. Further, adding response times to the model produced more accurate trait estimation for tests with lower item difficulty (e.g., achievement tests). Implications for item generation and the response-processes aspect of validity are discussed.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11866334/pdf/
Citations: 0
Few and Different: Detecting Examinees With Preknowledge Using Extended Isolation Forests.
IF 1.0 | CAS Zone 4 | Psychology
Applied Psychological Measurement | Pub Date: 2025-02-20 | DOI: 10.1177/01466216251320403
Nate R Smith, Lisa A Keller, Richard A Feinberg, Chunyan Liu
Abstract: Item preknowledge refers to cases where examinees have advance knowledge of test material before taking the examination. When examinees have item preknowledge, the resulting scores do not truly reflect their proficiency. This contamination also affects the item-parameter estimates and therefore the scores of all examinees, regardless of whether they had prior knowledge. To ensure the validity of test scores, it is essential to identify both compromised items (CIs) and examinees with preknowledge (EWPs). In some cases the CIs are known, and the task reduces to determining the EWPs; however, given the potential threat to validity, it is critical for high-stakes testing programs to have a process for routinely monitoring for evidence of EWPs, often when CIs are unknown. Further, even knowing that specific items may have been compromised does not guarantee that any examinees had prior access to those items, or that examinees with prior access know how to use the preknowledge effectively. This paper therefore attempts to use response behavior to identify item preknowledge without knowledge of which items may have been compromised. While most research in this area has relied on traditional psychometric models, we investigate the utility of an unsupervised machine learning algorithm, the extended isolation forest (EIF), to detect EWPs. As in previous research, the response behaviors analyzed are response time (RT) and response accuracy (RA).
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11843570/pdf/
Citations: 0
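The isolation principle behind EIF can be shown in miniature, here in its ordinary axis-aligned form rather than the extended variant's oblique splits: anomalous examinees, say unusually fast response times paired with high accuracy, get separated from the bulk of the data in fewer random splits. All data below are simulated and illustrative.

```python
import numpy as np

def isolation_depth(x, data, rng, max_depth=10):
    """Number of random axis-aligned splits needed to isolate point x
    within data; anomalies isolate in fewer splits."""
    pts, depth = data, 0
    while depth < max_depth and len(pts) > 1:
        j = int(rng.integers(x.shape[0]))          # random feature
        lo, hi = pts[:, j].min(), pts[:, j].max()
        if lo == hi:
            break
        cut = rng.uniform(lo, hi)                  # random split point
        pts = pts[pts[:, j] <= cut] if x[j] <= cut else pts[pts[:, j] > cut]
        depth += 1
    return depth

def mean_depth(x, data, rng, trees=200):
    """Average isolation depth over many random trees."""
    return np.mean([isolation_depth(x, data, rng) for _ in range(trees)])

rng = np.random.default_rng(0)
# Columns: standardized response time and accuracy (simulated)
normal = rng.normal(0, 1, size=(200, 2))
suspect = np.array([-4.0, 4.0])    # very fast and very accurate
data = np.vstack([normal, suspect])
d_suspect = mean_depth(suspect, data, rng)
d_center = mean_depth(np.zeros(2), data, rng)
print(d_suspect < d_center)  # -> True: the suspect isolates sooner
```

A forest-level detector scores each examinee by average depth and flags the shallowest; the extended variant replaces the axis-aligned cuts with randomly oriented hyperplanes.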