Robert W. Loy, Neil D. Christiansen, Robert P. Tett, Katherine Klein, Margaret Toich
{"title":"低风险和高风险就业环境下人格测试效度的差异","authors":"Robert W. Loy, Neil D. Christiansen, Robert P. Tett, Katherine Klein, Margaret Toich","doi":"10.1111/ijsa.70018","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>The impact of applicant faking on personality test validity in high-stakes settings remains debated in personnel selection research, with some arguing it distorts scores while others suggest minimal effects on validity. This meta-analysis compares personality test validity across low-stakes (e.g., employee assessments) and high-stakes (e.g., applicant testing) settings. Results show validity was consistently higher in low-stakes settings across both unmatched and matched samples. In unmatched studies, personality test validity was higher in low-stakes settings (<i>r'</i> = 0.17, <i>k</i> = 20, <i>N</i> = 8883) than in high-stakes settings (<i>r'</i> = 0.13, <i>k</i> = 215, N = 68,372). Matched studies showed a substantial difference, where low-stakes validity (<i>r'</i> = 0.27) was 125% larger than high-stakes validity (<i>r'</i> = 0.12). These findings provide strong empirical evidence that faking substantially reduces personality test validity in selection contexts. We recommend organizations treat low-stakes validity evidence as provisional and use it only for interim hiring decisions until high-stakes validation data is available. 
To improve selection accuracy, organizations should prioritize validation studies in motivated samples, apply statistical corrections for faking, and implement faking-resistant measures (e.g., forced-choice formats).</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 3","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Personality Test Validity Differs Between Low-Stakes and High-Stakes Employment Settings\",\"authors\":\"Robert W. Loy, Neil D. Christiansen, Robert P. Tett, Katherine Klein, Margaret Toich\",\"doi\":\"10.1111/ijsa.70018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>The impact of applicant faking on personality test validity in high-stakes settings remains debated in personnel selection research, with some arguing it distorts scores while others suggest minimal effects on validity. This meta-analysis compares personality test validity across low-stakes (e.g., employee assessments) and high-stakes (e.g., applicant testing) settings. Results show validity was consistently higher in low-stakes settings across both unmatched and matched samples. In unmatched studies, personality test validity was higher in low-stakes settings (<i>r'</i> = 0.17, <i>k</i> = 20, <i>N</i> = 8883) than in high-stakes settings (<i>r'</i> = 0.13, <i>k</i> = 215, N = 68,372). Matched studies showed a substantial difference, where low-stakes validity (<i>r'</i> = 0.27) was 125% larger than high-stakes validity (<i>r'</i> = 0.12). These findings provide strong empirical evidence that faking substantially reduces personality test validity in selection contexts. We recommend organizations treat low-stakes validity evidence as provisional and use it only for interim hiring decisions until high-stakes validation data is available. 
To improve selection accuracy, organizations should prioritize validation studies in motivated samples, apply statistical corrections for faking, and implement faking-resistant measures (e.g., forced-choice formats).</p></div>\",\"PeriodicalId\":51465,\"journal\":{\"name\":\"International Journal of Selection and Assessment\",\"volume\":\"33 3\",\"pages\":\"\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2025-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Selection and Assessment\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/ijsa.70018\",\"RegionNum\":4,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MANAGEMENT\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Selection and Assessment","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/ijsa.70018","RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 0
Abstract
Personality Test Validity Differs Between Low-Stakes and High-Stakes Employment Settings
The impact of applicant faking on personality test validity in high-stakes settings remains debated in personnel selection research, with some arguing it distorts scores while others suggest minimal effects on validity. This meta-analysis compares personality test validity across low-stakes (e.g., employee assessments) and high-stakes (e.g., applicant testing) settings. Results show validity was consistently higher in low-stakes settings across both unmatched and matched samples. In unmatched studies, personality test validity was higher in low-stakes settings (r' = 0.17, k = 20, N = 8,883) than in high-stakes settings (r' = 0.13, k = 215, N = 68,372). Matched studies showed a substantial difference, where low-stakes validity (r' = 0.27) was 125% larger than high-stakes validity (r' = 0.12). These findings provide strong empirical evidence that faking substantially reduces personality test validity in selection contexts. We recommend organizations treat low-stakes validity evidence as provisional and use it only for interim hiring decisions until high-stakes validation data is available. To improve selection accuracy, organizations should prioritize validation studies in motivated samples, apply statistical corrections for faking, and implement faking-resistant measures (e.g., forced-choice formats).
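The "125% larger" figure for the matched samples follows directly from the two reported validities; a minimal sketch of that arithmetic, using only the r' values stated in the abstract:

```python
# Matched-sample operational validities reported in the abstract (r').
low_stakes_r = 0.27   # low-stakes settings (e.g., employee assessments)
high_stakes_r = 0.12  # high-stakes settings (e.g., applicant testing)

# Percentage by which low-stakes validity exceeds high-stakes validity.
pct_larger = (low_stakes_r - high_stakes_r) / high_stakes_r * 100
print(f"Low-stakes validity is {pct_larger:.0f}% larger")  # prints "Low-stakes validity is 125% larger"
```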
About the journal:
The International Journal of Selection and Assessment publishes original articles related to all aspects of personnel selection, staffing, and assessment in organizations. Using an effective combination of academic research with professional-led best practice, IJSA aims to develop new knowledge and understanding in these important areas of work psychology and contemporary workforce management.