Personality Test Validity Differs Between Low-Stakes and High-Stakes Employment Settings

IF 2.4 · CAS Zone 4 (Management) · JCR Q3 (MANAGEMENT)
Robert W. Loy, Neil D. Christiansen, Robert P. Tett, Katherine Klein, Margaret Toich
DOI: 10.1111/ijsa.70018 · International Journal of Selection and Assessment, 33(3) · Published 2025-08-13 · Journal Article
Citations: 0

Abstract


The impact of applicant faking on personality test validity in high-stakes settings remains debated in personnel selection research, with some arguing it distorts scores while others suggest minimal effects on validity. This meta-analysis compares personality test validity across low-stakes (e.g., employee assessments) and high-stakes (e.g., applicant testing) settings. Results show validity was consistently higher in low-stakes settings across both unmatched and matched samples. In unmatched studies, personality test validity was higher in low-stakes settings (r' = 0.17, k = 20, N = 8,883) than in high-stakes settings (r' = 0.13, k = 215, N = 68,372). Matched studies showed a substantial difference, where low-stakes validity (r' = 0.27) was 125% larger than high-stakes validity (r' = 0.12). These findings provide strong empirical evidence that faking substantially reduces personality test validity in selection contexts. We recommend organizations treat low-stakes validity evidence as provisional and use it only for interim hiring decisions until high-stakes validation data is available. To improve selection accuracy, organizations should prioritize validation studies in motivated samples, apply statistical corrections for faking, and implement faking-resistant measures (e.g., forced-choice formats).
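The "125% larger" figure in the matched-sample comparison follows directly from the two reported validity coefficients. A minimal sketch of that arithmetic (using only the r' values quoted in the abstract):

```python
# Relative difference between matched-sample validities from the abstract.
# r' denotes the (corrected) validity coefficient reported for each setting.
low_stakes = 0.27   # matched-sample validity, low-stakes settings
high_stakes = 0.12  # matched-sample validity, high-stakes settings

# Percent by which low-stakes validity exceeds high-stakes validity.
pct_larger = (low_stakes - high_stakes) / high_stakes * 100
print(f"Low-stakes validity is {pct_larger:.0f}% larger")  # prints 125% larger
```

That is, (0.27 − 0.12) / 0.12 = 1.25, i.e., the low-stakes coefficient is 125% larger than the high-stakes one.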

Source journal metrics: CiteScore 4.10 · Self-citation rate 31.80% · Articles per year: 46
Journal description: The International Journal of Selection and Assessment publishes original articles related to all aspects of personnel selection, staffing, and assessment in organizations. Using an effective combination of academic research with professional-led best practice, IJSA aims to develop new knowledge and understanding in these important areas of work psychology and contemporary workforce management.