A comparison of the response-pattern-based faking detection methods.

Impact Factor: 9.4 · CAS Region 1 (Psychology) · JCR Q1 (Management)
Weiwen Nie, Ivan Hernandez, Louis Tay, Bo Zhang, Mengyang Cao
{"title":"A comparison of the response-pattern-based faking detection methods.","authors":"Weiwen Nie,Ivan Hernandez,Louis Tay,Bo Zhang,Mengyang Cao","doi":"10.1037/apl0001261","DOIUrl":null,"url":null,"abstract":"The covariance index method, the idiosyncratic item response method, and the machine learning method are the three primary response-pattern-based (RPB) approaches to detect faking on personality tests. However, less is known about how their performance is affected by different practical factors (e.g., scale length, training sample size, proportion of faking participants) and when they perform optimally. In the present study, we systematically compared the three RPB faking detection methods across different conditions in three empirical-data-based resampling studies. Overall, we found that the machine learning method outperforms the other two RPB faking detection methods in most simulation conditions. It was also found that the faking probabilities produced by all three RPB faking detection methods had moderate to strong positive correlations with true personality scores, suggesting that these RPB faking detection methods are likely to misclassify honest respondents with truly high personality trait scores as fakers. Fortunately, we found that the benefit of removing suspicious fakers still outweighs the consequences of misclassification. Finally, we provided practical guidance to researchers and practitioners to optimally implement the machine learning method and offered step-by-step code. (PsycInfo Database Record (c) 2025 APA, all rights reserved).","PeriodicalId":15135,"journal":{"name":"Journal of Applied Psychology","volume":"74 1","pages":""},"PeriodicalIF":9.4000,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Applied Psychology","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/apl0001261","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 0

Abstract

The covariance index method, the idiosyncratic item response method, and the machine learning method are the three primary response-pattern-based (RPB) approaches to detect faking on personality tests. However, less is known about how their performance is affected by different practical factors (e.g., scale length, training sample size, proportion of faking participants) and when they perform optimally. In the present study, we systematically compared the three RPB faking detection methods across different conditions in three empirical-data-based resampling studies. Overall, we found that the machine learning method outperforms the other two RPB faking detection methods in most simulation conditions. It was also found that the faking probabilities produced by all three RPB faking detection methods had moderate to strong positive correlations with true personality scores, suggesting that these RPB faking detection methods are likely to misclassify honest respondents with truly high personality trait scores as fakers. Fortunately, we found that the benefit of removing suspicious fakers still outweighs the consequences of misclassification. Finally, we provided practical guidance to researchers and practitioners to optimally implement the machine learning method and offered step-by-step code. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
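The abstract does not reproduce the authors' step-by-step code, so the sketch below is an illustration only: a generic machine-learning faking detector of the kind the abstract describes, in which a classifier is trained on item-level response patterns from honest versus instructed-faking respondents and its predicted probabilities are used to flag suspicious cases. The simulated data, the random-forest choice, and the 0.80 cutoff are placeholder assumptions, not values or methods from the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_items = 20                    # scale length, one of the practical factors the study varies
n_honest, n_fakers = 500, 500   # training sample size and faking proportion also vary by condition

# Placeholder Likert-type (1-5) responses; fakers are shifted toward the desirable end.
honest = rng.integers(1, 6, size=(n_honest, n_items))
fakers = np.clip(rng.integers(1, 6, size=(n_fakers, n_items)) + 1, 1, 5)

X = np.vstack([honest, fakers]).astype(float)
y = np.concatenate([np.zeros(n_honest), np.ones(n_fakers)])  # 1 = instructed faking

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# "Faking probabilities" for held-out respondents; a cutoff flags suspicious cases.
faking_prob = clf.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, faking_prob), 3))
flagged = faking_prob > 0.80    # illustrative cutoff, not a recommendation from the paper
print("Flagged as suspicious:", int(flagged.sum()), "of", len(flagged))

In a real application the simulated matrices would be replaced with responses collected under honest and instructed-faking conditions, and, given the paper's finding that faking probabilities correlate positively with true trait scores, flagged respondents who are honest but high on the trait would be treated as a known misclassification risk.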
Source journal: Journal of Applied Psychology
CiteScore: 17.60 · Self-citation rate: 6.10% · Articles published per year: 175
Journal description: The Journal of Applied Psychology® focuses on publishing original investigations that contribute new knowledge and understanding to fields of applied psychology (excluding clinical and applied experimental or human factors, which are better suited for other APA journals). The journal primarily considers empirical and theoretical investigations that enhance understanding of cognitive, motivational, affective, and behavioral psychological phenomena in work and organizational settings. These phenomena can occur at individual, group, organizational, or cultural levels, and in various work settings such as business, education, training, health, service, government, or military institutions. The journal welcomes submissions from both public and private sector organizations, for-profit or nonprofit. It publishes several types of articles, including:
1. Rigorously conducted empirical investigations that expand conceptual understanding (original investigations or meta-analyses).
2. Theory development articles and integrative conceptual reviews that synthesize literature and generate new theories on psychological phenomena to stimulate novel research.
3. Rigorously conducted qualitative research on phenomena that are challenging to capture with quantitative methods or require inductive theory building.