Applied Psychological Measurement: Latest Articles

Structure-Based Classification Approach.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-07-14. DOI: 10.1177/01466216251360544
Jongwan Kim
Abstract: This study introduces a novel structure-based classification (SBC) framework that leverages pairwise distance representations of rating data to enhance classification performance while mitigating individual differences in scale usage. Unlike conventional feature-based approaches that rely on absolute rating scores, SBC transforms rating data into structured representations by computing pairwise distances between rating dimensions. This transformation captures the relational structure of ratings, ensuring consistency between training and test datasets and enhancing model robustness. To evaluate the effectiveness of this approach, we conducted a simulation study in which participants rated stimuli across multiple affective dimensions, with systematic individual differences in scale usage. The results demonstrated that SBC successfully classified affective stimuli despite these variations, performing comparably to traditional classification methods. The findings suggest that relational structures among rating dimensions contain meaningful information for affective classification, akin to functional connectivity approaches in cognitive neuroscience. By focusing on rating interdependencies rather than absolute values, SBC provides a robust and generalizable method for analyzing subjective responses, with implications for psychological research.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12264251/pdf/
Cited: 0
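The core transformation behind SBC can be sketched in a few lines. The function name and toy ratings below are ours, not from the paper, and a full pipeline would feed these features to any standard classifier:

```python
import numpy as np

def pairwise_distance_features(ratings):
    """Transform an (n_dims,) rating vector into absolute pairwise
    distances between rating dimensions. Additive shifts in an
    individual's scale usage cancel out of every distance."""
    r = np.asarray(ratings, dtype=float)
    i, j = np.triu_indices(len(r), k=1)
    return np.abs(r[i] - r[j])

# Two raters with the same relational structure but shifted scale usage:
a = np.array([1, 3, 2, 5])
b = a + 2  # this rater uses the upper end of the scale
print(pairwise_distance_features(a))
print(pairwise_distance_features(b))  # identical features
```

Because both raters yield the same feature vector, a classifier trained on these features is unaffected by the additive scale-usage difference, which is the robustness property the abstract describes.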
Including Empirical Prior Information in the Reliable Change Index.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-07-10. DOI: 10.1177/01466216251358492
R Philip Chalmers, Sarah Campbell
Abstract: The reliable change index (RCI; Jacobson & Truax, 1991) is commonly used to assess whether individuals have changed across two measurement occasions, and has seen many augmentations and improvements since its initial conception. In this study, we extend an item response theory version of the RCI presented by Jabrayilov et al. (2016) by including empirical priors in the associated RCI computations whenever group-level differences are quantifiable given post-test response information. Based on a reanalysis and extension of a previous simulation study, we demonstrate that although a small amount of bias is added to the estimates of the latent trait differences when no true change is present, including empirical prior information will generally improve the Type I error behavior of the model-based RCI. Consequently, when non-zero changes in the latent trait are present, the bias and sampling variability are shown to be more favorable than those of competing estimators, subsequently leading to an increase in power to detect non-zero changes.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12245826/pdf/
Cited: 0
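For orientation, the classical Jacobson and Truax (1991) RCI that this work extends can be computed as follows; the paper's IRT version with empirical priors is not reproduced here:

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Classic Jacobson & Truax (1991) RCI: the pre-post difference
    scaled by the standard error of the difference score."""
    sem = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)           # SE of the difference score
    return (post - pre) / s_diff

rci = reliable_change_index(pre=40, post=50, sd_pre=10, reliability=0.8)
print(round(rci, 2))  # approximately 1.58; |RCI| > 1.96 suggests reliable change
```

In this example the 10-point gain falls short of the conventional 1.96 threshold, so under the classical index it would not count as reliable change.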
Using Group Differences in True Score Relationships to Evaluate Measurement Bias.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-07-07. DOI: 10.1177/01466216251358491
Michael T Kane, Joanne Kane
Abstract: This paper makes three contributions to our understanding of measurement bias and predictive bias in testing. First, we develop a linear model for assessing measurement bias across two tests and two groups in terms of the estimated true-score relationships between the two tests in the two groups. This new model for measurement bias is structurally similar to the Cleary model for predictive bias, but it relies on the Errors-in-Variables (EIV) regression model rather than the Ordinary-Least-Squares (OLS) regression model. Second, we examine some differences between measurement bias and predictive bias in three cases in which two groups have different true-score means, and we illustrate how regression toward the mean in OLS regression can lead to questionable conclusions about test bias if the differences between measurement bias and predictive bias are ignored. Third, we reevaluate a body of empirical findings suggesting that the tests employed in college-admissions and employment-testing programs tend to over-predict criterion performance for minorities, and we show that these findings are consistent with the occurrence of substantial measurement bias against the minority group relative to the majority group.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12234520/pdf/
Cited: 0
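A minimal, self-contained illustration of why the EIV model matters here: OLS on an error-contaminated predictor attenuates the true-score slope toward zero, and dividing by the predictor's reliability undoes the attenuation. The simulation below is ours, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_x = rng.normal(0, 1, n)            # true scores on test X
y_true = 0.8 * true_x                    # true-score relationship, slope 0.8
x_obs = true_x + rng.normal(0, 0.5, n)   # observed = true score + measurement error

# OLS on observed scores is attenuated toward zero...
b_ols = np.cov(x_obs, y_true)[0, 1] / np.var(x_obs)

# ...while the EIV-style correction divides by the reliability of X.
reliability_x = np.var(true_x) / np.var(x_obs)  # population value: 1 / 1.25 = 0.8
b_eiv = b_ols / reliability_x
print(round(b_ols, 2), round(b_eiv, 2))  # roughly 0.64 versus 0.80
```

The attenuated OLS slope (about 0.8 × 0.8 = 0.64) is exactly the kind of regression-toward-the-mean artifact that the abstract warns can distort conclusions about test bias when groups differ in true-score means.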
Standard Error Estimation for Subpopulation Non-invariance.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-07-05. DOI: 10.1177/01466216251351947
Paul A Jewsbury
Abstract: Score linking is widely used to place scores from different assessments, or the same assessment under different conditions, onto a common scale. A central concern is whether the linking function is invariant across subpopulations, as violations may threaten fairness. However, evaluating subpopulation differences in linked scores is challenging because linking error is not independent of sampling and measurement error when the same data are used to estimate the linking function and to compare score distributions. We show that common approaches involving neglecting linking error, or treating it as independent, substantially overestimate the standard errors of subpopulation differences. We introduce new methods that account for linking error dependencies. Simulation results demonstrate the accuracy of the proposed methods, and a practical example with real data illustrates how improved standard error estimation enhances power for detecting subpopulation non-invariance.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12228644/pdf/
Cited: 0
Detecting DIF with the Multi-Unidimensional Pairwise Preference Model: Lord's Chi-square and IPR-NCDIF Methods.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-07-01. DOI: 10.1177/01466216251351949
Lavanya S Kumar, Naidan Tu, Sean Joo, Stephen Stark
Abstract: Multidimensional forced choice (MFC) measures are gaining prominence in noncognitive assessment, yet there has been little research on detecting differential item functioning (DIF) with models for forced choice measures. This research extended two well-known DIF detection methods to MFC measures. Specifically, the performance of Lord's chi-square and item parameter replication (IPR) methods with MFC tests based on the Multi-Unidimensional Pairwise Preference (MUPP) model was investigated. The Type I error rate and power of the DIF detection methods were examined in a Monte Carlo simulation that manipulated sample size, impact, DIF source, and DIF magnitude. Both methods showed consistent power and were found to control Type I error well across study conditions, indicating that established approaches to DIF detection work well with the MUPP model. Lord's chi-square outperformed the IPR method when the DIF source was statement discrimination, while the opposite was true when the DIF source was statement threshold. Also, both methods performed similarly and showed better power when the DIF source was statement location, in line with previous research. Study implications and practical recommendations for DIF detection with MFC tests, as well as limitations, are discussed.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12213542/pdf/
Cited: 0
How to Make Sense of Reliability? Common Language Interpretation of Reliability and the Relation of Reliability to Effect Size.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-06-24. DOI: 10.1177/01466216251350159
Jari Metsämuuronen, Timi Niemensivu
Abstract: Communicating the factual meaning of a particular reliability estimate is sometimes difficult. What does a specific reliability estimate of 0.80 or 0.95 mean in common language? Deflation-corrected estimates of reliability (DCER) using Somers' D or Goodman-Kruskal G as the item-score correlations are transformed into forms where specific estimates from the family of common language effect sizes are visible. This makes it possible to communicate reliability estimates using a common language, and to evaluate the magnitude of a particular reliability estimate in the same way and with the same metric as we do with effect size estimates. Using a DCER, we can say that with k = 40 items, if the reliability is 0.95, then in 80 out of 100 random pairs of test takers from different subpopulations on all items combined, those with a higher item response will also score higher on the test. In this case, using the thresholds familiar from effect sizes, we can say that the reliability is "very high." The transformation of the reliability estimate into a common language effect size depends on the size of the item-score association estimates and the number of items, so no closed-form equations for the transformations are given. However, relevant thresholds are provided for practical use.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12187714/pdf/
Cited: 0
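The common-language reading can be illustrated with a toy Somers' D computation. The conversion of D to a probability of superiority via (D + 1)/2 is our simplification (it assumes no score ties among pairs untied on the item), and the data below are hypothetical; the paper's DCER machinery is not shown:

```python
def somers_d(x, y):
    """Somers' D of y with respect to x: (concordant - discordant)
    over the number of pairs untied on x."""
    conc = disc = untied = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[i] == x[j]:
                continue  # pairs tied on the item are excluded
            untied += 1
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / untied

item = [0, 0, 1, 1, 1]        # hypothetical item responses
total = [10, 12, 11, 15, 18]  # hypothetical test scores
d = somers_d(item, total)
# Common-language reading: the probability that, in a random untied pair,
# the person with the higher item response also has the higher test score.
p_superiority = (d + 1) / 2
print(d, p_superiority)
```

Here D is 2/3, so in roughly 83 out of 100 untied pairs the person with the higher item response also scores higher, which is the style of statement the abstract advocates.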
Increase of Uncertainty in Summed-Score-Based Scoring in Non-Rasch IRT.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-06-12. DOI: 10.1177/01466216251350342
Eisuke Segawa
Abstract: Summed-score (SS)-based scoring in non-Rasch IRT allows for pencil-and-paper administration and is used in the Patient-Reported Outcomes Measurement Information System (PROMIS) alongside response-pattern (RP)-based scoring. However, this convenience comes with an increase in uncertainty (the increase) associated with SS scoring. The increase can be quantified through the relationship between Bayesian SS and RP scoring. Given an SS of s, the SS posterior is a weighted sum of RP posteriors, with weights representing the marginal probabilities of RPs. From this mixture, the SS score (SS posterior mean) is a weighted sum of RP posterior means, and its uncertainty (variance of the SS posterior) is decomposed into the uncertainty of RP scoring (the weighted sum of RP posterior variances) and the increase (the variance of the RP posterior means). Without quantifying the increase, PROMIS recommends RP scoring for greater accuracy, suggesting SS scoring as a second option. Using variance decomposition, we quantified the increases for two short forms (SFs). In one, the increase is very small, making SS scoring as accurate as RP scoring, while in the other, the increase is large, indicating SS scoring may not be a viable second option. The increase varies widely, influencing scoring decisions, and should be reported for each SF when SS scoring is used.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12162545/pdf/
Cited: 0
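The decomposition described in the abstract is the law of total variance applied to the mixture of RP posteriors. A sketch with hypothetical weights and posterior moments (none of these numbers come from the paper):

```python
import numpy as np

# Hypothetical response patterns sharing the same summed score s,
# with marginal probabilities (weights), posterior means, and variances.
w = np.array([0.5, 0.3, 0.2])         # P(response pattern | SS = s)
mu = np.array([0.10, 0.25, -0.05])    # RP posterior means of theta
v = np.array([0.040, 0.045, 0.050])   # RP posterior variances

ss_mean = np.sum(w * mu)                     # SS score: weighted mean of RP means
within = np.sum(w * v)                       # RP-scoring uncertainty
increase = np.sum(w * (mu - ss_mean) ** 2)   # "the increase": variance of RP means
ss_var = within + increase                   # total SS posterior variance
print(ss_mean, within, increase, ss_var)
```

Comparing `increase` against `within` is exactly the check the author recommends: when `increase` is negligible, SS scoring loses almost nothing relative to RP scoring.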
tna: An R Package for Transition Network Analysis.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-06-05. DOI: 10.1177/01466216251348840
Santtu Tikka, Sonsoles López-Pernas, Mohammed Saqr
Abstract: Understanding the dynamics of transitions plays a central role in educational research, informing studies of learning processes, motivation shifts, and social interactions. Transition network analysis (TNA) is a unified framework of probabilistic modeling and network analysis for capturing the temporal and relational aspects of transitions between events or states of interest. We introduce the R package tna, which implements procedures for estimating TNA models, building the transition networks, identifying patterns and communities, computing centrality measures, and visualizing the networks. The package also implements several statistical procedures that can be used to assess differences between groups, the stability of centrality measures, and the importance of specific transitions.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12141252/pdf/
Cited: 0
On the Use of Elbow Plot Method for Class Enumeration in Factor Mixture Models.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-05-20. DOI: 10.1177/01466216251344288
Sedat Sen, Allan S Cohen
Abstract: Application of factor mixture models (FMMs) requires determining the correct number of latent classes. A number of studies have examined the performance of several information criterion (IC) indices, but as yet none have studied the effectiveness of the elbow plot method. In this study, therefore, the effectiveness of the elbow plot method was compared with the lowest value criterion and the difference method calculated from five commonly used IC indices. Results of a simulation study showed the elbow plot method detected the generating model at least 90% of the time for two- and three-class FMMs. Results also showed the elbow plot method did not perform well for two-factor and four-class conditions. The performance of the elbow plot method was generally better than that of the lowest IC value criterion and the difference method under two- and three-class conditions. For the four-latent-class conditions, there were no meaningful differences between the results of the elbow plot method and the lowest value criterion method. On the other hand, the difference method outperformed the other two methods in conditions with two factors and four classes.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12092417/pdf/
Cited: 0
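The elbow plot method in the study is visual, but the underlying heuristic can be automated. One common stand-in, shown below with hypothetical BIC values, picks the class count after which the improvement in fit shrinks the most:

```python
import numpy as np

# Hypothetical BIC values for 1-5 latent classes; the drop in BIC
# flattens after three classes, producing an elbow at k = 3.
bic = np.array([5200.0, 4900.0, 4700.0, 4680.0, 4675.0])
k = np.arange(1, len(bic) + 1)

improvement = -np.diff(bic)  # gain from adding each extra class: [300, 200, 20, 5]
# The elbow is where that gain collapses, i.e., the largest drop
# in successive improvements (the most negative second difference).
elbow = k[np.argmax(-np.diff(improvement)) + 1]
print(elbow)
```

This second-difference rule is only one way to formalize "where the curve bends"; in practice the plot is usually inspected by eye, as in the study summarized above.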
Maximum Marginal Likelihood Estimation of the MUPP-GGUM Model.
IF 1.0 · CAS Region 4 · Psychology
Applied Psychological Measurement. Pub Date: 2025-04-19. DOI: 10.1177/01466216251336925
Jianbin Fu
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12009269/pdf/
Cited: 0