Comparison of different reliability estimation methods for single-item assessment: a simulation study.

IF 2.6 · CAS Tier 3 (Psychology) · JCR Q2 · PSYCHOLOGY, MULTIDISCIPLINARY
Frontiers in Psychology · Pub Date: 2024-11-01 · eCollection Date: 2024-01-01 · DOI: 10.3389/fpsyg.2024.1482016
Sijun Zhang, Kimberly Colvin
Citations: 0

Abstract

Single-item assessments have recently become popular in various fields, and researchers have developed methods for estimating their reliability: some are based on factor analysis and correction for attenuation, while others use the double monotonicity model, Guttman's λ6, or the latent class model. However, no empirical study has investigated which method best estimates the reliability of single-item assessments. This study addressed that question with a simulation study. To represent assessments as they are found in practice, the simulation varied several factors: the item discrimination parameter, the test length of the multi-item assessment of the same construct, the sample size, and the correlation between the single-item assessment and the multi-item assessment of the same construct. The results suggest that by using the method based on the double monotonicity model and the method based on correction for attenuation simultaneously, researchers can obtain the most precise estimate of the range of reliability of a single-item assessment in 94.44% of cases. The test length of the multi-item assessment, the item discrimination parameter, the sample size, and the correlation between the single-item and multi-item assessments did not influence the choice of method.
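Two of the estimators the abstract names can be stated compactly. Correction for attenuation assumes the single item and a multi-item scale of the same construct have a true-score correlation of 1, so from r_xy = √r_xx · √r_yy the single-item reliability follows as r_xx = r_xy² / r_yy. Guttman's λ6 uses each item's squared multiple correlation with the remaining items. The sketch below is an illustration based on these textbook formulas, not the paper's own code; all function names are the author's (hypothetical), and the residual-variance identity e_j² = 1/P_jj (from the precision matrix P) is a standard linear-algebra shortcut.

```python
import numpy as np

def single_item_reliability_attenuation(r_xy, r_yy):
    """Single-item reliability via correction for attenuation.

    Assumes the single item x and the multi-item scale y measure the
    same construct (true-score correlation = 1), so
    r_xy = sqrt(r_xx) * sqrt(r_yy)  =>  r_xx = r_xy**2 / r_yy.
    """
    return r_xy ** 2 / r_yy

def guttman_lambda6(cov):
    """Guttman's lambda-6 for a k-item covariance matrix.

    lambda6 = 1 - sum_j(e_j^2) / total test variance, where e_j^2 is
    the residual variance of item j regressed on all other items.
    That residual variance equals 1 / P[j, j], the reciprocal of the
    j-th diagonal element of the inverse (precision) matrix.
    """
    cov = np.asarray(cov, dtype=float)
    precision = np.linalg.inv(cov)
    residual_var = 1.0 / np.diag(precision)  # e_j^2 for each item
    return 1.0 - residual_var.sum() / cov.sum()
```

For example, with r_xy = 0.6 and r_yy = 0.8, the attenuation-based estimate is 0.36 / 0.8 = 0.45; for four unit-variance items with all pairwise correlations 0.5, λ6 works out to 0.75.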

Source journal: Frontiers in Psychology (PSYCHOLOGY, MULTIDISCIPLINARY)
CiteScore: 5.30
Self-citation rate: 13.20%
Articles published per year: 7,396
Review time: 14 weeks
Journal description: Frontiers in Psychology is the largest journal in its field, publishing rigorously peer-reviewed research across the psychological sciences, from clinical research to cognitive science, from perception to consciousness, from imaging studies to human factors, and from animal cognition to social psychology. Field Chief Editor Axel Cleeremans at the Free University of Brussels is supported by an outstanding Editorial Board of international researchers. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics, clinicians, and the public worldwide. The journal publishes the best research across the entire field of psychology. Today, psychological science is becoming increasingly important at all levels of society, from the treatment of clinical disorders to our basic understanding of how the mind works. It is highly interdisciplinary, borrowing questions from philosophy, methods from neuroscience, and insights from clinical practice, all with the goal of furthering our grasp of human nature and society, as well as our ability to develop new intervention methods.