The Fairness of Credit Scoring Models
Christophe Hurlin, C. Pérignon, Sébastien Saurin
Microeconomics: Welfare Economics & Collective Decision-Making eJournal, published 2021-02-15
DOI: 10.2139/ssrn.3785882

Abstract: Artificial Intelligence (AI) can systematically treat a group of individuals sharing a protected attribute (e.g., gender, age, or race) unfavorably. In credit scoring applications, this lack of fairness can severely distort access to credit and expose AI-enabled financial institutions to legal and reputational risks. In this paper, we develop a unified framework for assessing the fairness of AI algorithms used in credit markets. First, we propose an inference procedure to test various fairness metrics. Second, we present an interpretability technique, called the Fairness Partial Dependence Plot, to identify the source(s) of the lack of fairness and mitigate fairness concerns. We illustrate the effectiveness of our framework using a dataset of consumer loans and a series of machine-learning algorithms.
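The abstract does not spell out which fairness metrics the inference procedure tests. As a hedged illustration only (not the authors' method), one widely used group-fairness metric in credit scoring is the statistical parity difference: the gap in approval rates between two groups defined by a protected attribute. A minimal sketch, assuming binary credit decisions and a binary protected attribute:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Gap in approval rates between two groups (0 indicates parity).

    y_pred: binary credit decisions (1 = approved, 0 = denied).
    group:  binary protected attribute (0 or 1) for each applicant.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # approval rate, group 0
    rate_1 = y_pred[group == 1].mean()  # approval rate, group 1
    return rate_0 - rate_1

# Hypothetical decisions for eight applicants, four per group
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups =    [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(decisions, groups))  # 0.75 - 0.25 = 0.5
```

The paper's contribution goes further, providing a statistical test for whether such a gap is significant rather than sampling noise; the sketch above only computes the point estimate.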