{"title":"Enhancing gender equity in resume job matching via debiasing-assisted deep generative model and gender-weighted sampling","authors":"Swati Tyagi , Anuj , Wei Qian , Jiaheng Xie , Rick Andrews","doi":"10.1016/j.jjimei.2024.100283","DOIUrl":null,"url":null,"abstract":"<div><div>Our work aims to mitigate gender bias within word embeddings and investigates the effects of these adjustments on enhancing fairness in resume job-matching problems. By conducting a case study on resume data, we explore the prevalence of gender bias in job categorization—a significant barrier to equal career opportunities, particularly in the context of machine learning applications. This study scrutinizes how biased representations in job assignments, influenced by a variety of factors such as skills and resume descriptors within diverse semantic frameworks, affect the classification process. The investigation extends to the nuanced language of resumes and the presence of subtle gender biases, including the employment of gender-associated terms, and examines how these terms’ vector representations can skew fairness, leading to a disproportionate mapping of resumes to job categories based on gender.</div><div>Our findings reveal a significant correlation between gender discrepancies in classification true positive rate and gender imbalances across professions that potentially deepen these disparities. The goal of this study is to (1) mitigate bias at the level of word embeddings via a debiasing-assisted deep generative modeling approach, thereby fostering more equitable and gender-fair vector representations; (2) evaluate the resultant impact on the fairness of job classification; (3) explore the implementation of a gender-weighted sampling technique to achieve a more balanced representation of genders across various job categories when such an imbalance exists. This approach involves modifying the data distribution according to gender before it is input into the classifier model, aiming to ensure equal opportunity and promote gender fairness in occupational classifications. The code for this paper is publicly available on <span><span>GitHub</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100699,"journal":{"name":"International Journal of Information Management Data Insights","volume":"4 2","pages":"Article 100283"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Management Data Insights","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667096824000727","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Our work aims to mitigate gender bias within word embeddings and to investigate the effects of these adjustments on fairness in resume job matching. Through a case study on resume data, we examine the prevalence of gender bias in job categorization, a significant barrier to equal career opportunities, particularly in machine learning applications. This study scrutinizes how biased representations in job assignments, influenced by factors such as skills and resume descriptors within diverse semantic frameworks, affect the classification process. The investigation extends to the nuanced language of resumes and the presence of subtle gender biases, including the use of gender-associated terms, and examines how these terms’ vector representations can skew fairness, leading to a disproportionate mapping of resumes to job categories based on gender.
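The paper's debiasing-assisted deep generative model is not reproduced here; as a minimal sketch of how a term's vector representation can encode gender association, the snippet below estimates a gender direction from a definitional word pair and measures (and removes) a word's component along it, in the style of hard debiasing. All words and embeddings are placeholder assumptions, not the authors' method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder embeddings (random, for illustration only); a real analysis
# would load pretrained vectors such as word2vec or GloVe.
words = ["he", "she", "nurse", "engineer", "babysitter", "architect"]
emb = {w: rng.standard_normal(300) for w in words}
emb = {w: v / np.linalg.norm(v) for w, v in emb.items()}

# A simple gender direction: normalized difference of a definitional pair.
g = emb["he"] - emb["she"]
g /= np.linalg.norm(g)

def gender_score(word: str) -> float:
    """Projection onto the gender direction; a large |score| for a
    nominally gender-neutral term suggests an encoded gender association."""
    return float(emb[word] @ g)

def neutralize(word: str) -> np.ndarray:
    """Remove the gender component via projection (hard-debiasing style)."""
    v = emb[word] - gender_score(word) * g
    return v / np.linalg.norm(v)

for w in ["nurse", "engineer"]:
    print(w, round(gender_score(w), 3), round(float(neutralize(w) @ g), 3))
```

With real pretrained vectors, nominally neutral occupation words often show nonzero projections onto such a direction; that is the kind of skew in vector representations the abstract describes.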
Our findings reveal a significant correlation between gender discrepancies in classification true-positive rates and gender imbalances across professions, a pattern that can potentially deepen these disparities. The goals of this study are to (1) mitigate bias at the level of word embeddings via a debiasing-assisted deep generative modeling approach, thereby fostering more equitable, gender-fair vector representations; (2) evaluate the resulting impact on the fairness of job classification; and (3) explore a gender-weighted sampling technique that achieves a more balanced representation of genders across job categories where an imbalance exists. This approach modifies the data distribution by gender before it is fed into the classifier, aiming to ensure equal opportunity and promote gender fairness in occupational classification. The code for this paper is publicly available on GitHub.
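The fairness signal referenced above, the gap in true-positive rates between genders, can be computed per job category. Below is a minimal sketch assuming binary relevance labels and a two-valued gender attribute; the function names and the "F"/"M" encoding are hypothetical, not taken from the paper's code.

```python
import numpy as np

def tpr(y_true, y_pred):
    # True-positive rate = TP / (TP + FN); nan when the group has no positives.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos = y_true == 1
    return float(np.mean(y_pred[pos] == 1)) if pos.any() else float("nan")

def tpr_gap(y_true, y_pred, gender):
    # Difference in TPR between female- and male-labeled resumes for one
    # job category; values far from 0 indicate a gender disparity.
    y_true, y_pred, gender = map(np.asarray, (y_true, y_pred, gender))
    f, m = gender == "F", gender == "M"
    return tpr(y_true[f], y_pred[f]) - tpr(y_true[m], y_pred[m])

# Toy check: classifier recovers all female positives but misses male ones.
print(tpr_gap([1, 1, 1, 1], [1, 1, 0, 0], ["F", "F", "M", "M"]))  # 1.0
```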
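The gender-weighted sampling step could be realized in several ways; one minimal sketch, assuming each resume carries a job-category and gender label (all names below are hypothetical), assigns sampling weights inversely proportional to each (category, gender) cell count, so both genders are equally likely to be drawn within every job category.

```python
from collections import Counter
import numpy as np

def gender_balanced_weights(categories, genders):
    """Per-example sampling weights inversely proportional to the
    (category, gender) cell count, equalizing the expected gender
    mix within each job category when sampling with replacement."""
    counts = Counter(zip(categories, genders))
    return np.array([1.0 / counts[(c, g)] for c, g in zip(categories, genders)])

# Hypothetical usage: these weights could drive a weighted sampler
# (e.g., torch.utils.data.WeightedRandomSampler) ahead of training.
cats = ["nurse", "nurse", "nurse", "engineer", "engineer"]
gens = ["F", "F", "M", "M", "F"]
print(gender_balanced_weights(cats, gens))  # [0.5 0.5 1.  1.  1. ]
```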