Louis Hickman, Christopher Huynh, Jessica Gass, Brandon Booth, Jason Kuruzovich, Louis Tay
{"title":"偏见去哪儿,我就去哪儿:对减少算法偏差的综合、系统回顾。","authors":"Louis Hickman, Christopher Huynh, Jessica Gass, Brandon Booth, Jason Kuruzovich, Louis Tay","doi":"10.1037/apl0001255","DOIUrl":null,"url":null,"abstract":"<p><p>Machine learning (ML) models are increasingly used for personnel assessment and selection (e.g., resume screeners, automatically scored interviews). However, concerns have been raised throughout society that ML assessments may be biased and perpetuate or exacerbate inequality. Although organizational researchers have begun investigating ML assessments from traditional psychometric and legal perspectives, there is a need to understand, clarify, and integrate fairness operationalizations and algorithmic bias mitigation methods from the computer science, data science, and organizational research literature. We present a four-stage model of developing ML assessments and applying bias mitigation methods, including (a) generating the training data, (b) training the model, (c) testing the model, and (d) deploying the model. When introducing the four-stage model, we describe potential sources of bias and unfairness at each stage. Then, we systematically review definitions and operationalizations of algorithmic bias, legal requirements governing personnel selection from the United States and Europe, and research on algorithmic bias mitigation across multiple domains and integrate these findings into our framework. Our review provides insights for both research and practice by elucidating possible mechanisms of algorithmic bias while identifying which bias mitigation methods are legal and effective. This integrative framework also reveals gaps in the knowledge of algorithmic bias mitigation that should be addressed by future collaborative research between organizational researchers, computer scientists, and data scientists. We provide recommendations for developing and deploying ML assessments, as well as recommendations for future research into algorithmic bias and fairness. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":15135,"journal":{"name":"Journal of Applied Psychology","volume":" ","pages":""},"PeriodicalIF":9.4000,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Whither bias goes, I will go: An integrative, systematic review of algorithmic bias mitigation.\",\"authors\":\"Louis Hickman, Christopher Huynh, Jessica Gass, Brandon Booth, Jason Kuruzovich, Louis Tay\",\"doi\":\"10.1037/apl0001255\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Machine learning (ML) models are increasingly used for personnel assessment and selection (e.g., resume screeners, automatically scored interviews). However, concerns have been raised throughout society that ML assessments may be biased and perpetuate or exacerbate inequality. Although organizational researchers have begun investigating ML assessments from traditional psychometric and legal perspectives, there is a need to understand, clarify, and integrate fairness operationalizations and algorithmic bias mitigation methods from the computer science, data science, and organizational research literature. We present a four-stage model of developing ML assessments and applying bias mitigation methods, including (a) generating the training data, (b) training the model, (c) testing the model, and (d) deploying the model. 
When introducing the four-stage model, we describe potential sources of bias and unfairness at each stage. Then, we systematically review definitions and operationalizations of algorithmic bias, legal requirements governing personnel selection from the United States and Europe, and research on algorithmic bias mitigation across multiple domains and integrate these findings into our framework. Our review provides insights for both research and practice by elucidating possible mechanisms of algorithmic bias while identifying which bias mitigation methods are legal and effective. This integrative framework also reveals gaps in the knowledge of algorithmic bias mitigation that should be addressed by future collaborative research between organizational researchers, computer scientists, and data scientists. We provide recommendations for developing and deploying ML assessments, as well as recommendations for future research into algorithmic bias and fairness. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>\",\"PeriodicalId\":15135,\"journal\":{\"name\":\"Journal of Applied Psychology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":9.4000,\"publicationDate\":\"2024-12-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Applied Psychology\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/apl0001255\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MANAGEMENT\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Applied Psychology","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/apl0001255","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 0
Whither bias goes, I will go: An integrative, systematic review of algorithmic bias mitigation.
Machine learning (ML) models are increasingly used for personnel assessment and selection (e.g., resume screeners, automatically scored interviews). However, concerns have been raised throughout society that ML assessments may be biased and perpetuate or exacerbate inequality. Although organizational researchers have begun investigating ML assessments from traditional psychometric and legal perspectives, there is a need to understand, clarify, and integrate fairness operationalizations and algorithmic bias mitigation methods from the computer science, data science, and organizational research literature. We present a four-stage model of developing ML assessments and applying bias mitigation methods, including (a) generating the training data, (b) training the model, (c) testing the model, and (d) deploying the model. When introducing the four-stage model, we describe potential sources of bias and unfairness at each stage. Then, we systematically review definitions and operationalizations of algorithmic bias, legal requirements governing personnel selection from the United States and Europe, and research on algorithmic bias mitigation across multiple domains and integrate these findings into our framework. Our review provides insights for both research and practice by elucidating possible mechanisms of algorithmic bias while identifying which bias mitigation methods are legal and effective. This integrative framework also reveals gaps in the knowledge of algorithmic bias mitigation that should be addressed by future collaborative research between organizational researchers, computer scientists, and data scientists. We provide recommendations for developing and deploying ML assessments, as well as recommendations for future research into algorithmic bias and fairness. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
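To make one of the reviewed fairness operationalizations concrete, the sketch below computes two widely used statistics for model-driven selection decisions: the adverse impact ratio (the basis of the US "four-fifths rule" heuristic from the Uniform Guidelines on Employee Selection Procedures) and the demographic parity difference. This is an illustrative sketch only; the function names and example data are hypothetical and are not drawn from the article itself.

```python
# Illustrative sketch (not from the paper): two common operationalizations of
# algorithmic bias in personnel selection. Names and data are hypothetical.

def selection_rate(decisions):
    """Proportion of applicants selected (decisions are 0/1 outcomes)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def adverse_impact_ratio(focal_decisions, reference_decisions):
    """Ratio of the focal group's selection rate to the reference group's.
    Under the US "four-fifths rule" heuristic, a ratio below 0.8 is often
    treated as prima facie evidence of adverse impact."""
    ref_rate = selection_rate(reference_decisions)
    return selection_rate(focal_decisions) / ref_rate if ref_rate else float("nan")

def demographic_parity_difference(focal_decisions, reference_decisions):
    """Absolute difference in selection rates; 0 indicates statistical parity."""
    return abs(selection_rate(focal_decisions) - selection_rate(reference_decisions))

if __name__ == "__main__":
    # Hypothetical model decisions (1 = advance to interview, 0 = reject).
    group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # reference group: 5/8 = 0.625 selected
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # focal group: 3/8 = 0.375 selected
    print(f"Adverse impact ratio: {adverse_impact_ratio(group_b, group_a):.2f}")        # 0.60
    print(f"Parity difference:    {demographic_parity_difference(group_b, group_a):.3f}")  # 0.250
```

In this hypothetical example, the adverse impact ratio of 0.60 falls below the 0.8 threshold, which is the kind of disparity the bias mitigation methods reviewed in the article are designed to detect and reduce at the data-generation, training, testing, or deployment stage.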
Journal Introduction:
The Journal of Applied Psychology® focuses on publishing original investigations that contribute new knowledge and understanding to fields of applied psychology (excluding clinical and applied experimental or human factors, which are better suited for other APA journals). The journal primarily considers empirical and theoretical investigations that enhance understanding of cognitive, motivational, affective, and behavioral psychological phenomena in work and organizational settings. These phenomena can occur at individual, group, organizational, or cultural levels, and in various work settings such as business, education, training, health, service, government, or military institutions. The journal welcomes submissions from both public and private sector organizations, for-profit or nonprofit. It publishes several types of articles, including:
1. Rigorously conducted empirical investigations that expand conceptual understanding (original investigations or meta-analyses).
2. Theory development articles and integrative conceptual reviews that synthesize literature and generate new theories on psychological phenomena to stimulate novel research.
3. Rigorously conducted qualitative research on phenomena that are challenging to capture with quantitative methods or require inductive theory building.