Applied Psychological Measurement: Latest Articles

Application of Bayesian Decision Theory in Detecting Test Fraud.
Sandip Sinharay, Matthew S Johnson
Applied Psychological Measurement | Pub Date: 2025-01-27 | DOI: 10.1177/01466216251316559 | IF 1.0 | Q4 (Psychology)
Abstract: This article suggests a new approach based on Bayesian decision theory (e.g., Cronbach & Gleser, 1965; Ferguson, 1967) for the detection of test fraud. The approach leads to a simple decision rule that involves computing the posterior probability that an examinee committed test fraud given the data. The suggested approach was applied to a real data set involving actual test fraud.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773507/pdf/
Citations: 0
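The abstract's decision rule can be sketched with a toy calculation. The prior, likelihoods, and loss values below are hypothetical placeholders, not the authors' actual model; the point is only the shape of the rule, which flags an examinee when the posterior probability of fraud exceeds a threshold determined by the two misclassification losses.

```python
# Illustrative Bayesian decision rule for flagging test fraud.
# All numeric values here are hypothetical, not from the article.

def posterior_fraud(prior, lik_fraud, lik_honest):
    """Posterior probability of fraud given the observed data, by Bayes' rule."""
    evidence = lik_fraud * prior + lik_honest * (1.0 - prior)
    return lik_fraud * prior / evidence

def flag(posterior, loss_false_flag=50.0, loss_missed_fraud=1.0):
    """Flag when the expected loss of not flagging exceeds that of flagging,
    i.e., when posterior > L_false / (L_false + L_miss)."""
    return posterior > loss_false_flag / (loss_false_flag + loss_missed_fraud)

p = posterior_fraud(prior=0.01, lik_fraud=0.40, lik_honest=0.002)
# Even a likelihood ratio of 200 yields a posterior of only about 0.67 under a
# 1% prior, below the conservative threshold of 50/51, so no flag is raised.
print(p, flag(p))
```

Making the loss of a false accusation much larger than the loss of a missed fraud pushes the flagging threshold toward 1, which is the usual conservative stance in test-security work.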
R Package for Calculating Estimators of the Proportion of Explained Variance and Standardized Regression Coefficients in Multiply Imputed Datasets.
Joost R van Ginkel, Julian D Karch
Applied Psychological Measurement | Pub Date: 2025-01-26 | DOI: 10.1177/01466216251316275 | IF 1.0 | Q4 (Psychology)
Abstract: not available.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11770685/pdf/
Citations: 0
An Experimental Design to Investigate Item Parameter Drift.
Peter Baldwin, Irina Grabovsky, Kimberly A Swygert, Thomas Fogle, Pilar Reid, Brian E Clauser
Applied Psychological Measurement | Pub Date: 2025-01-24 | DOI: 10.1177/01466216251316282 | IF 1.0 | Q4 (Psychology)
Abstract: Methods for detecting item parameter drift may be inadequate when every exposed item is at risk for drift. To address this scenario, a strategy for detecting item parameter drift is proposed that uses only unexposed items, deployed by stratified random assignment within an experimental design. The proposed method is illustrated by investigating unexpected score increases on a high-stakes licensure exam. Results for this example were suggestive of item parameter drift but not significant at the .05 level.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11760077/pdf/
Citations: 0
Adaptive Measurement of Change in the Context of Item Parameter Drift.
Allison W Cooperman, Ming Him Tai, Joseph N DeWeese, David J Weiss
Applied Psychological Measurement | Pub Date: 2024-12-30 | DOI: 10.1177/01466216241310599 | IF 1.0 | Q4 (Psychology)
Abstract: Adaptive measurement of change (AMC) uses computerized adaptive testing (CAT) to measure and test the significance of intraindividual change on one or more latent traits. The extant AMC research has so far assumed that item parameter values are constant across testing occasions. Yet item parameters might change over time, a phenomenon termed item parameter drift (IPD). The current study examined AMC's performance in the context of IPD with unidimensional, dichotomous CATs across two testing occasions. A Monte Carlo simulation revealed that AMC false and true positive rates were primarily affected by changes in the difficulty parameter. False positive rates were related to the location of the drift items on the latent trait continuum, as the administration of more drift items spuriously increased the magnitude of estimated trait change. Moreover, true positive rates depended upon an interaction between the direction of difficulty parameter drift and the latent trait change trajectory. A follow-up simulation further showed that the number of items in the CAT with parameter drift impacted AMC false and true positive rates, with these relationships moderated by IPD characteristics and the latent trait change trajectory. It is recommended that test administrators confirm the absence of IPD prior to using AMC for measuring intraindividual change with educational and psychological tests.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11683792/pdf/
Citations: 0
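The mechanism behind the false positives the abstract reports is easy to see in a dichotomous item response function. The 2PL form and the parameter values below are assumed for illustration only: when an item's difficulty drifts downward between occasions, the same examinee answers it correctly more often, which a change-detection procedure can misread as trait growth.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response
    for trait level theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item whose difficulty drifts from b = 0.0 to b = -0.5
# between testing occasions; the examinee's trait theta is unchanged.
before = p_correct(theta=0.0, a=1.2, b=0.0)   # 0.5 at theta == b
after = p_correct(theta=0.0, a=1.2, b=-0.5)
# The success probability rises although theta did not change, so the
# apparent "change" is an artifact of difficulty drift, not real growth.
print(before, after)
```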
Inference of Correlations Among Testlet Effects: A Latent Variable Selection Method.
Xin Xu, Jinxin Guo, Tao Xin
Applied Psychological Measurement | Pub Date: 2024-12-26 | DOI: 10.1177/01466216241310598 | IF 1.0 | Q4 (Psychology)
Abstract: In psychological and educational measurement, a testlet-based test is a common and popular format, especially in some large-scale assessments. In modeling testlet effects, a standard bifactor model, as a common strategy, assumes the different testlet effects and the main effect to be fully independently distributed. However, it is difficult to establish perfectly independent clusters under this assumption. To address this issue, correlations among testlets can be taken into account when fitting the data. Moreover, one may wish to maintain a good practical interpretation of the sparse loading matrix. In this paper, we propose data-driven learning of significant correlations in the covariance matrix through a latent variable selection method. Under the proposed method, regularization is performed on the weak correlations of the extended bifactor model, and a stochastic expectation maximization algorithm is employed for efficient computation. Results from simulation studies show the consistency of the proposed method in selecting significant correlations. Empirical data from the 2015 Program for International Student Assessment are analyzed using the proposed method as an example.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11670239/pdf/
Citations: 0
An Information Manifold Perspective for Analyzing Test Data.
James O Ramsay, Juan Li, Joakim Wallmark, Marie Wiberg
Applied Psychological Measurement | Pub Date: 2024-12-20 | DOI: 10.1177/01466216241310600 | IF 1.0 | Q4 (Psychology)
Abstract: Modifications of current psychometric models for analyzing test data are proposed that produce an additive scale measure of information. This information measure is a one-dimensional space curve or curved surface manifold that is invariant across varying manifold indexing systems. The arc length along a curve manifold is used because it is an additive metric with a defined zero and a version of the bit as a unit. This property, referred to here as the scope of the test or an item, facilitates the evaluation of graphs and numerical summaries. The measurement power of the test is defined by the length of the manifold, and the performance or experiential level of a person by a position along the curve. In this study, we also use all information from the items, including the information from the distractors. Test data from a large-scale college admissions test are used to illustrate the test information manifold perspective and to compare it with the well-known item response theory nominal model. It is illustrated that the use of information theory opens a vista of new ways of assessing item performance and inter-item dependency, as well as test takers' knowledge.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11662344/pdf/
Citations: 0
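The additivity the abstract claims for an arc-length metric is straightforward to check numerically: the length from t0 to t2 equals the length from t0 to t1 plus the length from t1 to t2. The curve below is a toy half-circle, not an information manifold fitted to test data, and the chord-summing integrator is a generic sketch.

```python
import math

def arc_length(curve, t0, t1, n=10000):
    """Numerically accumulate arc length along a parameterized plane curve
    by summing chord lengths over a fine partition of [t0, t1]."""
    total = 0.0
    prev = curve(t0)
    for i in range(1, n + 1):
        t = t0 + (t1 - t0) * i / n
        pt = curve(t)
        total += math.dist(prev, pt)
        prev = pt
    return total

# Toy "curve manifold": a unit half-circle, whose true arc length is pi.
half_circle = lambda t: (math.cos(math.pi * t), math.sin(math.pi * t))
print(arc_length(half_circle, 0.0, 1.0))  # ≈ 3.14159
```

Because the measure is additive with a defined zero, positions along the curve can be compared by simple subtraction, which is what makes it attractive as a score scale.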
A Generalized Multi-Detector Combination Approach for Differential Item Functioning Detection.
Shan Huang, Hidetoki Ishii
Applied Psychological Measurement | Pub Date: 2024-12-19 | DOI: 10.1177/01466216241310602 | IF 1.0 | Q4 (Psychology)
Abstract: Many studies on differential item functioning (DIF) detection rely on single detection methods (SDMs), each of which requires specific assumptions that may not always hold. Using an inappropriate SDM can diminish the accuracy of DIF detection. To address this limitation, a novel multi-detector combination (MDC) approach is proposed. Unlike SDMs, MDC evaluates the relevance of different SDMs under various test conditions and integrates them using supervised learning, thereby mitigating the risk of selecting a suboptimal SDM for DIF detection. This study validated the accuracy of the MDC approach by applying five types of SDMs and four distinct supervised learning methods in MDC modeling. Model performance was assessed using the area under the curve (AUC), which provides a comprehensive measure of a model's ability to distinguish between classes across all threshold levels, with higher AUC values indicating higher accuracy. The MDC methods consistently achieved higher average AUC values than SDMs in both matched test sets (where test conditions align with the training set) and unmatched test sets. Furthermore, MDC outperformed all SDMs under each test condition. These findings indicate that MDC is highly accurate and robust across diverse test conditions, establishing it as a viable method for practical DIF detection.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660104/pdf/
Citations: 0
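The comparison metric in this study is the AUC, which equals the probability that a randomly chosen DIF item receives a higher detector score than a randomly chosen non-DIF item. A minimal stdlib computation via the Mann-Whitney formulation is sketched below; the scores and labels are toy data, not the study's detectors.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the fraction
    of (positive, negative) pairs ranked correctly, with ties counting 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy detector scores for six items, three with simulated DIF (label 1):
# one DIF item (score 0.4) is out-ranked by a non-DIF item (score 0.6).
print(auc([0.9, 0.7, 0.6, 0.4, 0.3, 0.1], [1, 1, 0, 1, 0, 0]))  # 8/9 ≈ 0.889
```

An AUC of 1.0 means the detector separates the classes perfectly at some threshold; 0.5 means it ranks no better than chance, which is why averaging AUC across conditions is a natural way to compare SDMs with MDC.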
Evaluating the Construct Validity of Instructional Manipulation Checks as Measures of Careless Responding to Surveys.
Mark C Ramsey, Nathan A Bowling, Preston S Menke
Applied Psychological Measurement 48(7-8): 341-356 | Pub Date: 2024-11-01 (Epub 2024-09-20) | DOI: 10.1177/01466216241284293 | IF 1.0 | Q4 (Psychology)
Abstract: Careless responding measures are important for several purposes, whether for screening out careless responses or for research that treats careless responding as a substantive variable. One such approach for assessing carelessness in surveys is the instructional manipulation check. Despite its apparent popularity, little is known about the construct validity of instructional manipulation checks as measures of careless responding. Initial results are inconclusive, and no study has thoroughly evaluated the validity of the instructional manipulation check as a measure of careless responding. Across two samples (N = 762), we evaluated the construct validity of the instructional manipulation check under a nomological network. We found that the instructional manipulation check converged poorly with other measures of careless responding, weakly predicted participants' inability to recognize study content, and did not display incremental validity over existing measures of careless responding. Additional analyses revealed that instructional manipulation checks performed poorly compared to single scores of other careless responding measures, and that screening data with alternative measures of careless responding produced gains in data quality greater than or similar to those from instructional manipulation checks. Based on the results of our studies, we do not recommend using instructional manipulation checks to assess or screen for careless responding to surveys.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11501094/pdf/
Citations: 0
Estimating Test-Retest Reliability in the Presence of Self-Selection Bias and Learning/Practice Effects.
William C M Belzak, J R Lockwood
Applied Psychological Measurement 48(7-8): 323-340 | Pub Date: 2024-11-01 (Epub 2024-09-17) | DOI: 10.1177/01466216241284585 | IF 1.0 | Q4 (Psychology)
Abstract: Test-retest reliability is often estimated using naturally occurring data from test repeaters. In settings such as admissions testing, test takers choose if and when to retake an assessment. This self-selection can bias estimates of test-retest reliability, because individuals who choose to retest are typically unrepresentative of the broader testing population and because differences among test takers in learning or practice effects may increase with time between test administrations. We develop a set of methods for estimating test-retest reliability from observational data that can mitigate these sources of bias, including sample weighting, polynomial regression, and Bayesian model averaging. We demonstrate the value of these methods for reducing bias and improving the precision of estimated reliability using empirical and simulated data, both based on more than 40,000 repeaters of a high-stakes English language proficiency test. Finally, these methods generalize to settings in which only a single, error-prone measurement is taken repeatedly over time and where self-selection and/or changes to the underlying construct may be at play.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11528726/pdf/
Citations: 0
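Of the three bias-mitigation tools the abstract names, sample weighting is the simplest to sketch: reweight repeaters so they resemble the full testing population, then compute a weighted correlation between first and second scores. The function below is a generic weighted Pearson correlation, offered as an illustration of that one ingredient, not as the authors' estimator.

```python
def weighted_corr(x, y, w):
    """Weighted Pearson correlation between paired scores x and y.
    With weights chosen so repeaters resemble the full testing population
    (e.g., inverse probability-of-retest weights), this gives a test-retest
    correlation less distorted by self-selection."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / (vx * vy) ** 0.5
```

With equal weights this reduces to the ordinary Pearson correlation on the repeater sample, which is exactly the naive estimate the paper argues can be biased.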
A Mark-Recapture Approach to Estimating Item Pool Compromise.
Richard A Feinberg
Applied Psychological Measurement 48(7-8): 357-363 | Pub Date: 2024-11-01 (Epub 2024-09-13) | DOI: 10.1177/01466216241284410 | IF 1.0 | Q4 (Psychology)
Abstract: Testing organizations routinely investigate whether secure exam material has been compromised and is consequently invalid for scoring and inclusion on future assessments. Beyond identifying individual compromised items, knowing the degree to which a form is compromised can inform decisions about whether the form can no longer be administered, or when an item pool is compromised to such an extent that serious action on a broad scale must be taken to ensure the validity of score interpretations. Previous research on estimating the population of compromised items is sparse; however, this is a long-studied problem in ecological research. In this note, we illustrate the utility of the mark-recapture technique for estimating the population of compromised items, first through a brief demonstration of the fundamental concepts and then through a more realistic scenario showing applicability to large-scale testing programs. An effective use of this technique would be to longitudinally track changes in the estimated population to inform operational test security strategies. Many variations on mark-recapture exist, and interpretation of the estimated population depends on several factors. Thus, this note is only meant to introduce mark-recapture as a useful tool for evaluating a testing organization's compromise mitigation procedures.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11528777/pdf/
Citations: 0
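The fundamental mark-recapture idea can be illustrated with the classic Lincoln-Petersen estimator in its bias-corrected (Chapman) form; which variant the note uses is not stated here, and the monitoring counts below are invented for illustration. Items found in a first sweep of disclosure sources play the role of "marked" animals; items found again in a second sweep are the "recaptures."

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population size:
    here, the total number of compromised items, where n1 items were found in
    a first monitoring sweep, n2 in a second, and m appeared in both."""
    if not 0 <= m <= min(n1, n2):
        raise ValueError("recaptures must satisfy 0 <= m <= min(n1, n2)")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical sweeps: 50 items found, then 40, with 20 seen in both.
print(round(chapman_estimate(50, 40, 20)))  # ≈ 99 compromised items
```

Tracking this estimate across repeated sweeps, as the note suggests, would show whether the compromised portion of the pool is growing faster than items can be retired.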