{"title":"Improving Fairness in Credit Lending Models using Subgroup Threshold Optimization","authors":"Cecilia Ying, Stephen Thomas","doi":"arxiv-2403.10652","DOIUrl":null,"url":null,"abstract":"In an effort to improve the accuracy of credit lending decisions, many\nfinancial intuitions are now using predictions from machine learning models.\nWhile such predictions enjoy many advantages, recent research has shown that\nthe predictions have the potential to be biased and unfair towards certain\nsubgroups of the population. To combat this, several techniques have been\nintroduced to help remove the bias and improve the overall fairness of the\npredictions. We introduce a new fairness technique, called \\textit{Subgroup\nThreshold Optimizer} (\\textit{STO}), that does not require any alternations to\nthe input training data nor does it require any changes to the underlying\nmachine learning algorithm, and thus can be used with any existing machine\nlearning pipeline. STO works by optimizing the classification thresholds for\nindividual subgroups in order to minimize the overall discrimination score\nbetween them. Our experiments on a real-world credit lending dataset show that\nSTO can reduce gender discrimination by over 90\\%.","PeriodicalId":501128,"journal":{"name":"arXiv - QuantFin - Risk Management","volume":"4 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuantFin - Risk Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2403.10652","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
In an effort to improve the accuracy of credit lending decisions, many financial institutions are now using predictions from machine learning models. While such predictions enjoy many advantages, recent research has shown that they can be biased and unfair towards certain subgroups of the population. To combat this, several techniques have been introduced to help remove the bias and improve the overall fairness of the predictions. We introduce a new fairness technique, called \textit{Subgroup Threshold Optimizer} (\textit{STO}), that requires no alterations to the input training data and no changes to the underlying machine learning algorithm, and thus can be used with any existing machine learning pipeline. STO works by optimizing the classification thresholds for individual subgroups in order to minimize the overall discrimination score between them. Our experiments on a real-world credit lending dataset show that STO can reduce gender discrimination by over 90\%.
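To make the threshold-optimization idea concrete, below is a minimal sketch of what a per-subgroup threshold search could look like. It assumes the "discrimination score" is the gap in positive-prediction rates between two subgroups and that a simple grid search suffices; the paper's actual metric, optimizer, and any accuracy constraints may differ, and the function names here (subgroup_threshold_search, positive_rate) are hypothetical.

```python
# Illustrative sketch only: the abstract does not specify the exact
# discrimination score or search procedure, so this assumes a grid search
# that minimizes the gap in positive-prediction rates between two subgroups.
import numpy as np

def positive_rate(scores, threshold):
    """Fraction of instances classified as positive at a given threshold."""
    return float(np.mean(scores >= threshold))

def subgroup_threshold_search(scores, groups, grid=None):
    """Pick one classification threshold per subgroup so that the
    positive-prediction rates of the two subgroups are as close as
    possible (an assumed stand-in for the paper's discrimination score)."""
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    labels = np.unique(groups)
    best_thresholds, best_gap = None, np.inf
    # Exhaustive search over threshold pairs; tractable for two subgroups.
    for t_a in grid:
        for t_b in grid:
            rate_a = positive_rate(scores[groups == labels[0]], t_a)
            rate_b = positive_rate(scores[groups == labels[1]], t_b)
            gap = abs(rate_a - rate_b)
            if gap < best_gap:
                best_thresholds, best_gap = {labels[0]: t_a, labels[1]: t_b}, gap
    return best_thresholds, best_gap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic model scores where one subgroup systematically scores lower.
    scores = np.concatenate([rng.beta(5, 2, 500), rng.beta(3, 4, 500)])
    groups = np.array(["A"] * 500 + ["B"] * 500)
    thresholds, gap = subgroup_threshold_search(scores, groups)
    print(thresholds, round(gap, 4))
```

Because the search operates only on the model's output scores, this kind of post-processing leaves both the training data and the underlying model untouched, which is what lets the approach drop into an existing machine learning pipeline.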